Progress in structured compressed sensing research


Reference format: Liu Fang, Wu Jiao, Yang Shuyuan, Jiao Licheng. Progress in structured compressed sensing research. Acta Automatica Sinica, 2013, 39(12): 1980-1995. Supported by the Program for Introducing Talents of Discipline to Universities (111 Program) (B07048), the Program for Cheung Kong Scholars and Innovative Research Teams in Universities (IRT1170), the Research Fund for the Doctoral Program of Higher Education (20110203110006), and the Key Laboratory of Intelligent Perception and Image Understanding of the Ministry of Education.

Compressed observation, sparse representation, and optimized signal reconstruction are the three basic modules of compressed sensing (CS) and the three important directions of CS theory research: sparsity of the signal is a necessary condition for CS, incoherent observation is the key to CS, and nonlinear optimization is the means by which CS reconstructs the signal.
(Fig. 1. Traditional compressed sensing framework)

2 Low-dimensional structural models of signals

In general, models containing prior knowledge are useful for finding or processing the signals of interest, and the signals we study often have underlying low-dimensional structures; that is, the degrees of freedom of a high-dimensional signal are usually far fewer than its dimension. In recent years, research on low-dimensional structural models of signals has appeared in many fields. This section describes several signal structure models commonly used in CS.
2.1 Sparse signal model

The sparse signal model is the simplest model commonly used in signal processing, and traditional CS theory is built on it. Mathematically, when a signal $x \in \mathbb{R}^N$ has only $k$ non-zero coefficients under some basis or dictionary, i.e., $\|x\|_0 = k$ ($k \ll N$), $x$ is called $k$-sparse. Sparsity reflects the fact that in many cases a high-dimensional signal actually contains only an amount of information far below its dimension. Most signals in real scenes are not exactly sparse, but can be well approximated by $k$-sparse signals; such signals are said to be compressible. The set of all $k$-sparse signals is denoted $\Sigma_k = \{x : \|x\|_0 \le k\}$. Reconstructing the signal from the compressed measurements is an underdetermined problem, and many researchers have studied its theory and solution algorithms over the past decade. The theory shows that, under the sparse model constraint, every $k$-sparse signal is uniquely determined by the measurements when $\mathrm{spark}(\Phi) > 2k$, where the spark of the observation matrix $\Phi$ is the smallest number of linearly dependent columns; the number of measurements guaranteed by the spark property is $2k$.
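As a minimal illustration of these definitions (toy sizes of our choosing, not values from the paper), the following numpy fragment builds a $k$-sparse signal and takes a number of i.i.d. Gaussian measurements on the order of $k \log(N/k)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, k = 256, 8                                   # ambient dimension and sparsity

x = np.zeros(N)                                 # a k-sparse signal in R^N
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

M = int(np.ceil(4 * k * np.log(N / k)))         # on the order of k log(N/k)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # i.i.d. Gaussian observation matrix
y = Phi @ x                                     # M compressed measurements, M << N
print(M, N)                                     # e.g. 111 measurements vs N = 256
```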
2.3 Union of subspaces model

This structured model can also be generalized to infinite-dimensional spaces. For an $N$-dimensional $k$-sparse signal with certain structure, characterizing that structure may require restricting the support of the signal to a smaller subset. For example, when the non-zero coefficients of a signal appear in some form of aggregation, the structure of the signal can be characterized by the union of subspaces model. The union of subspaces model is an extension of the sparse model and can characterize a wider range of signal classes, including both finite- and infinite-dimensional ones.
In the union of subspaces model, if it is known that the signal $x$ lies in one of $L$ possible subspaces $\mathcal{U}_1, \dots, \mathcal{U}_L$, then $x$ lies in the union of these $L$ subspaces, i.e., $x \in \mathcal{U} = \bigcup_{i=1}^{L} \mathcal{U}_i$, where each $\mathcal{U}_i$ ($1 \le i \le L$) is a $k$-dimensional subspace of $\mathbb{R}^N$ corresponding to a particular set of positions of the $k$ non-zero coefficients of $x$. Compared with the set $\Sigma_k$ containing all possible $N$-dimensional $k$-sparse signals (a union of $\binom{N}{k}$ subspaces), $L$ is often much smaller than $\binom{N}{k}$. There is currently no uniform way to deal with all union models, and researchers have carried out theoretical and applied research on signal sampling and recovery under some special types of union of subspaces models. The simplest union model is the finite union of subspaces (FUS) model, in which both the number and the dimensions of the subspaces are finite.
A special case of the FUS model is the structured sparse supports model. The model takes advantage of extra information about the support, i.e., the positions of the non-zero elements of the vector, so that the union $\mathcal{U}$ contains only some of the subspaces of $\Sigma_k$. A typical structured sparse support model is the tree-structured supports model. Wavelet bases provide sparse or compressible representations for smooth and piecewise smooth signals, including natural images, and the wavelet coefficients of such signals and images naturally form a tree structure, with the large coefficients clustering along the branches of the tree (figure: (a) wavelet tree structure of a one-dimensional signal; (b) wavelet quadtree structure of a two-dimensional image). It is therefore sufficient to represent the signal by a union consisting only of the subspaces corresponding to tree structures.
Another special case of the FUS model is the sparse sums of subspaces model, in which each subspace $\mathcal{U}_i$ forming the union is a direct sum of low-dimensional subspaces: $\mathcal{U}_i = \bigoplus_{j \in J_i} \mathcal{W}_j$, where $\mathcal{W}_1, \dots, \mathcal{W}_L$ is a given collection of subspaces with $\dim(\mathcal{W}_j) = d_j$. The different subspaces $\mathcal{U}_i$ thus correspond to sums of different subsets of subspaces taken from the $L$ given subspaces. When $\dim(\mathcal{W}_j) = 1$, the model degenerates into the standard sparse model. From this, a block sparsity model can be obtained, in which some blocks of a vector are zero and the others are not. As an example of a block sparse vector, consider a vector divided into 5 blocks, where the shaded area represents the 10 non-zero elements of the vector, which occupy 2 blocks, and $d_l$ denotes the number of elements in the $l$-th block. When all $d_l = 1$, block sparsity degenerates to standard sparsity. The properties of block sparse models have been studied extensively in statistics. In addition, block sparse models are used in applications such as DNA microarray analysis, sparse communication channel equalization, and source localization (see the sketch below).
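A minimal sketch of the 5-block example above; the individual block sizes are our assumption, since the paper only fixes 5 blocks with 10 non-zero entries in 2 active blocks:

```python
import numpy as np

rng = np.random.default_rng(1)
d = [4, 6, 3, 4, 3]                    # block sizes d_l, L = 5 blocks, N = 20
offsets = np.cumsum([0] + d[:-1])      # start index of each block

x = np.zeros(sum(d))
for l in (1, 3):                       # two active blocks -> 6 + 4 = 10 nonzeros
    x[offsets[l]:offsets[l] + d[l]] = rng.standard_normal(d[l])

# Block sparsity counts active blocks, not individual nonzero entries:
block_norms = [np.linalg.norm(x[o:o + dl]) for o, dl in zip(offsets, d)]
print(sum(n > 0 for n in block_norms), int((x != 0).sum()))   # 2 blocks, 10 entries
```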
For the FUS model, the standard RIP of traditional CS is extended to the $(\mathcal{U}, \delta)$-RIP, and it has been proved that reconstruction algorithms designed for the FUS model can correctly recover block sparse vectors when the constant $\delta$ is small enough, together with the number of measurements needed to ensure stable recovery. Coherence is likewise generalized under the sparse sums of subspaces model, yielding the definition of the block-coherence of a matrix. Adding internal structure within the subspaces, such as sparsity inside each subspace, is equivalent to adding a regularization term representing sparsity to the optimization over a single block, thereby obtaining a multi-level structured sparse model, which has been successfully applied to source identification and separation problems.
The union of subspaces models with finite dimensions and numbers described above mainly rely on discretization of the analog input and do not consider the actual hardware system. In order to truly achieve low-rate sampling and reconstruction of structured analog signals in hardware, more complex union of subspaces models have emerged, including models with a finite number of infinite-dimensional subspaces, models with an infinite number of finite-dimensional subspaces, and models in which both the dimensions and the number of subspaces are infinite.
Since low-rate sampling of analog signals represented by unions of subspaces is involved, the methods used to solve this problem differ substantially from those used for the discretized signals in the finite union of subspaces models described above. The two main frameworks for sub-Nyquist sampling of analog signals are Xampling and finite rate of innovation (FRI). The Xampling framework primarily deals with analog signals that can be represented by a finite union of infinite-dimensional subspaces, such as the multiband model, in which the analog signal consists of a finite sum of band-limited signals that typically have relatively small bandwidth but are distributed over a large frequency range. Another type of signal that can be represented by a union of subspaces is the class of signals with a finite rate of innovation. Depending on the particular structure, this model corresponds to an infinite or finite union of finite-dimensional subspaces and can describe many signals with few degrees of freedom. In this case, each subspace corresponds to a certain choice of parameter values; when the set of possible parameter values is infinite, the number of subspaces formed by the model is also infinite. These analog union of subspaces models allow us to sample and process analog signals at low rates in real time and to design efficient hardware, using standard analog components such as modulators, low-rate analog-to-digital converters (ADCs), and low-pass filters to implement the analog front end, facilitating the development of the analog CS framework from theory to practical application.
2.4 Low-rank matrix model

The sparsity of a matrix is mainly manifested in two ways: 1) sparsity of the matrix elements, i.e., the matrix has few non-zero elements; 2) sparsity of the matrix singular values, i.e., the matrix has few non-zero singular values, which means the rank of the matrix is small; such a matrix is called a low-rank matrix.
For a matrix $X \in \mathbb{R}^{N_1 \times N_2}$, the set of low-rank matrices can be expressed as $\mathcal{L}_r = \{X : \mathrm{rank}(X) \le r\}$, and the singular value decomposition of $X$ is $X = U \Lambda V^{\mathrm{T}}$. Low-rank matrix reconstruction has become a hotspot in machine learning, signal processing, and computer vision in recent years; matrix recovery and matrix completion can be seen as the generalization of CS reconstruction from one-dimensional signals to two-dimensional matrices.
Under the low-rank matrix constraint, the matrix completion problem is expressed as
$$\min_X \ \mathrm{rank}(X) \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M),$$
where $\Omega$ is the index set of the known elements of the matrix and the projection operator $P_\Omega$ is defined by $[P_\Omega(X)]_{ij} = X_{ij}$ if $(i,j) \in \Omega$ and $0$ otherwise. More recently, low-rank matrix models that simultaneously consider the sparsity of the matrix elements and the sparsity of the singular values have been used for matrix recovery:
$$\min_{L,S} \ \mathrm{rank}(L) + \lambda \|S\|_0 \quad \text{s.t.} \quad X = L + S,$$
where $\lambda > 0$ is a regularization parameter. This model is commonly referred to as robust principal component analysis (RPCA). On the basis of RPCA, the low-rank representation (LRR) problem of low-rank plus sparse matrix decomposition was proposed. The LRR model is
$$\min_{Z,S} \ \mathrm{rank}(Z) + \lambda \|S\|_0 \quad \text{s.t.} \quad X = DZ + S,$$
where $D \in \mathbb{R}^{N_1 \times n}$ is a dictionary that linearly spans the data space and $n$ is the number of atoms in the dictionary. Similar to the $\ell_0$-minimization problem of CS, replacing $\mathrm{rank}(Z)$ with the nuclear norm transforms the above problems into convex optimization problems.
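The survey states these models without an algorithm at this point. As one minimal illustration, the classical singular value thresholding (SVT) iteration solves a convex relaxation of the matrix completion problem above; the parameter choices (`tau`, `delta`) and problem sizes below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def svt(A, tau):
    """Singular value thresholding: prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

rng = np.random.default_rng(2)
n = 60
M_true = rng.standard_normal((n, 5)) @ rng.standard_normal((5, n))  # rank 5
mask = rng.random((n, n)) < 0.5          # Omega: observe half of the entries

# SVT iteration: L_k = svt(Y, tau); Y <- Y + delta * P_Omega(M - L_k).
Y = np.zeros((n, n))
tau, delta = 5 * n, 1.5                  # illustrative step and threshold
for _ in range(300):
    L = svt(Y, tau)
    Y += delta * mask * (M_true - L)

err = np.linalg.norm(L - M_true) / np.linalg.norm(M_true)
print(round(err, 4))                     # small when the rank is low enough
```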
3 Structured compressed sensing

Traditional CS uses sparsity as the only prior information in signal acquisition and reconstruction, while structured CS introduces structural priors into the three basic modules of traditional CS, namely structured observation, structured dictionaries, and structured signal reconstruction. The theoretical framework of structured CS is shown in the figure. It can be seen that structured CS is based on structured sparse representation and uses structured observations matched to the signal; under structured priors, it achieves more effective reconstruction for a wider range of signal classes. Next, we introduce the three basic problems of structured CS theory in detail, in combination with the low-dimensional structural models of signals given in the previous section.
(Fig. 2. Structured compressed sensing framework)

3.1 Structured observation matrix

Random observation matrices guarantee the RIP and incoherence with high probability when signals are reconstructed from the low-dimensional measurement $y$, but when the signal dimension is high they lead to problems of high complexity and are difficult to implement.
In some specific applications, the type of observation matrix is usually limited by the sensing mode and capability of the sensor. To reduce the number of measurements and to sample analog signals, we also want the observation matrix to match the signal. Thus, compared with traditional CS, structured CS uses structured observation matrices that match the signal structure or the sensor's sensing mode. At present, structured observation matrices mainly include subsampled incoherent bases, structured subsampled matrices, subsampled circulant matrices, and separable matrices. A detailed overview of the theory and hardware implementation of structured observation matrices can be found in the literature.
Sampling with subsampled incoherent bases is achieved by first selecting an orthogonal basis that is incoherent with the sparse basis, and then selecting a subset of the coefficients of the signal in this orthogonal basis. There are two main types of applications of subsampled incoherent bases. In the first type, the acquisition hardware is limited to obtaining measurements directly in a transform domain; the most common examples are magnetic resonance imaging (MRI), tomography, and optical microscopy, in which the measurements obtained from the hardware correspond to two-dimensional continuous Fourier transform coefficients of the image (an example is MRI sampling with CS reconstruction). The second type of application is the design of new acquisition devices that obtain projections of the signal onto a set of vectors; for example, the single-pixel camera obtains projections of the image onto a set of vectors with binary elements.
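As an illustration of a subsampled incoherent basis (the partial-Fourier sensing of the MRI-type applications above), this fragment selects random rows of an orthonormal DFT matrix; sizes and the spike-domain sparse signal are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
N, M = 128, 40

# Subsampled incoherent basis: keep M random rows of an orthonormal DFT
# matrix -- the spike and Fourier bases are maximally incoherent.
F = np.fft.fft(np.eye(N)) / np.sqrt(N)       # orthonormal DFT basis
rows = rng.choice(N, size=M, replace=False)
Phi = F[rows, :]

x = np.zeros(N)
x[[5, 40, 90]] = [1.0, -2.0, 0.5]            # sparse in the canonical basis
y = Phi @ x                                  # measurements = random Fourier coefficients
```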
In addition, this type of structured observation matrix has been used to design acquisition devices for periodic multitone analog signals; such an acquisition device is called the random-sampling ADC. In some applications, the measurements obtained by the acquisition device cannot directly correspond to coefficients of the signal under a particular transform, and each observation obtained is a linear combination of multiple signal coefficients. The CS observation matrix arising in this case is called a structured subsampled matrix. Structured subsampled matrices have been used to design compressive information acquisition devices for periodic multitone analog signals; with this framework and an improved recovery algorithm, frequency-sparse signals over a wider band can be sampled.
(Fig.: single-pixel camera imaging; compressive ADC sampling model with analog filter and pseudo-random sequence)

Circulant structures are also used for CS observation matrices. They first appeared in channel estimation and multiuser detection in the communications field, where the channel responses and multiuser patterns to be estimated have sparse priors, and these signals are convolved with the impulse response of the sampling hardware before measurement. Since convolution is equivalent to a product operator in the Fourier transform domain, multiplication via the fast Fourier transform accelerates the CS recovery process.
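To sketch why circulant observations are computationally attractive (the FFT identity just stated), the following fragment applies a random circulant matrix by circular convolution and subsamples the output; the ±1 sequence and sizes are our illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(4)
N, M = 256, 64

h = rng.choice([-1.0, 1.0], size=N)          # random first column of the circulant
x = np.zeros(N)
x[rng.choice(N, 10, replace=False)] = rng.standard_normal(10)

# A circulant matrix-vector product is a circular convolution, so it costs
# O(N log N) via the FFT instead of O(N^2) with an explicit matrix.
Cx = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

keep = rng.choice(N, size=M, replace=False)  # subsample the convolution output
y = Cx[keep]
```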
For multidimensional signals, separable matrices provide a computationally efficient compressive measurement method, for example for hypercube sampling of multidimensional data. These matrices have a concise mathematical form as Kronecker products, and the correlations between matrix sub-blocks reflect a significant structural feature. Separable CS observation matrices are mainly used for multidimensional signals such as video sequences and hyperspectral data cubes; using separable observation matrices, the single-pixel camera has been extended to a single-pixel hyperspectral camera.
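The computational convenience of separable matrices rests on the Kronecker identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{\mathrm{T}})$, which lets one sense a 2-D signal without ever forming the full Kronecker matrix. A small numpy sketch under assumed sizes:

```python
import numpy as np

rng = np.random.default_rng(5)
n1, n2 = 64, 64                     # 2-D signal side lengths
m1, m2 = 16, 16                     # measurements per dimension

A = rng.standard_normal((m1, n1))   # operator along one dimension
B = rng.standard_normal((m2, n2))   # operator along the other dimension
X = rng.standard_normal((n2, n1))   # the 2-D signal

# Separable sensing: never form the (m1*m2) x (n1*n2) Kronecker matrix.
Y = B @ X @ A.T
y = Y.flatten(order="F")            # equals np.kron(A, B) @ X.flatten(order="F")

# Sanity check of the identity on a small instance:
a = rng.standard_normal((2, 3)); b = rng.standard_normal((2, 4))
Z = rng.standard_normal((4, 3))
assert np.allclose(np.kron(a, b) @ Z.flatten(order="F"),
                   (b @ Z @ a.T).flatten(order="F"))
```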
3.2 Structured sparse representation

The sparse representation of the signal is the premise for applying CS theory. Choosing an appropriate dictionary $\Psi$ so that the signal is sparser under $\Psi$ can improve the efficiency of signal sensing. Studies by Candès et al. and Donoho have shown that only $M \ge c\,k \log(N/k)$ independent and identically distributed Gaussian measurements are needed to accurately reconstruct, with high probability, a $k$-sparse signal or a compressible signal that is well approximated by a sparse signal. Evidently, the sparser the signal is under the dictionary $\Psi$, the fewer observations are needed to accurately reconstruct it. Therefore, in CS one seeks to use or design dictionaries under which the signal has a representation of high sparsity.
There are usually two ways to construct a dictionary: 1) analytical methods, which construct the dictionary using mathematical tools and yield fixed dictionaries; 2) learning-based methods, which learn a dictionary matched to specific signal data. Traditional CS uses fixed dictionaries, which for signals with complex structures are not flexible enough to achieve sufficient sparsity. Structured CS enriches the content of the dictionary by concatenating dictionaries and by learning dictionaries with specific structures, realizing adaptive structured sparse representations of signals.
3.2.1 Fixed dictionaries

Orthogonal dictionaries are the fixed dictionaries used by traditional CS in its early days, usually characterized by fast transform algorithms, such as the orthonormal dictionaries given by the Fourier transform, the discrete cosine transform, and the wavelet transform. These transforms feature simple construction, fast implementation, and low representation complexity. When the signal characteristics match the atomic features of the dictionary, efficient and accurate representations can be obtained. However, for complex signals such as images, orthogonal dictionaries are not flexible enough to achieve sufficient sparsity. Numerous studies have shown that overcomplete redundant dictionaries, such as curvelets, contourlets, and bandelets, can represent signals more flexibly and achieve higher sparsity. In the field of CS, Candès et al. theoretically proved that, under certain conditions, signals that are sparse under highly overcomplete redundant dictionaries can be accurately reconstructed.
Signals in the real world have complex structures and can be thought of as composed of components of various structural types, such as transient and steady components in audio signals, or edges, textures, and smooth regions in natural images. These structural types are completely different from one another, and no single one can effectively represent the others. Such mixtures of different structures can be effectively represented by dictionaries formed by concatenating orthogonal bases. When the coherence of a dictionary concatenated from orthogonal bases is $\mu = 1/\sqrt{N}$, the concatenated dictionary is said to be (completely) incoherent, and the sparse representation of the signal satisfies the exact reconstruction condition. Common concatenations include dictionaries composed of different orthogonal wavelet bases and dictionaries composed of a wavelet basis and a curvelet basis. Gribonval et al. gave a condition under which an arbitrary signal has a unique sparse representation under a finite-dimensional (redundant) dictionary, indicating that concatenated dictionaries composed of non-orthogonal dictionaries, such as a concatenation of biorthogonal wavelet bases, also perform well in the sparse representation of signals containing multiple structures. Concatenation enriches the content of the dictionary, so that each structure in the signal can be sparsely represented under the corresponding sub-dictionary; however, applying a concatenated dictionary still requires the characteristics of the signal to match those of the dictionary, otherwise a satisfactory representation is difficult to obtain.
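The complete-incoherence bound $\mu = 1/\sqrt{N}$ quoted above is attained, for example, by concatenating the identity (spike) basis with the orthonormal Fourier basis; the following check is a minimal illustration of ours, not a construction from the paper:

```python
import numpy as np

N = 64
I = np.eye(N)
F = np.fft.fft(np.eye(N)) / np.sqrt(N)      # orthonormal DFT basis
D = np.hstack([I, F])                        # two-ortho-basis concatenated dictionary

# Mutual coherence: largest absolute inner product between distinct atoms;
# for [I, F] it attains the lower bound 1/sqrt(N).
G = np.abs(D.conj().T @ D)
np.fill_diagonal(G, 0)
print(G.max(), 1 / np.sqrt(N))               # both 0.125 for N = 64
```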
3.2.2 Structured dictionary learning

The above dictionaries are fixed: once determined, their atom types do not change. Selecting such a dictionary in CS requires more or less prior information about the signal, and when the signal under study changes, the chosen dictionary may no longer be suitable. This motivates adaptive structured dictionary learning methods, which learn a dictionary from a large set of training samples in order to obtain optimal sparse representations of the signal. The mathematical model of dictionary learning is (13):
$$\min_{\Psi, X} \ \|F - \Psi X\|_F^2 \quad \text{s.t.} \quad \|x_i\|_0 \le k, \ i = 1, \dots, L,$$
where the matrix $F \in \mathbb{R}^{N \times L}$ is the training sample set with $f_i$ its $i$-th column, $\Psi \in \mathbb{R}^{N \times M}$ is the unknown dictionary, $X \in \mathbb{R}^{M \times L}$ is a sparse matrix, and each column of $X$ is the sparse representation of the corresponding column of $F$ under the dictionary.
The dictionary learning problem (13) is a non-convex combinatorial optimization problem. Classical algorithms for solving it include the method of optimal directions (MOD) and the K-SVD (K-singular value decomposition) algorithm. The MOD algorithm alternates between sparse coding and dictionary update: in the sparse coding step, the algorithm fixes the dictionary and sparsely codes each signal independently; in the dictionary update step, it updates the dictionary using the analytical solution of the quadratic optimization problem in (13). The MOD algorithm needs only a few iterations to converge; although the matrix inversion gives it relatively high complexity, MOD is in general a very effective method. The K-SVD algorithm uses a different dictionary update rule than MOD, updating the atoms of the dictionary (i.e., its column vectors) one by one together with the corresponding sparse coefficients at the current iteration, which makes K-SVD more efficient. Compared with analytical dictionary construction, these dictionary learning algorithms yield more efficient dictionaries and achieve better performance in practical applications.
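Neither MOD nor K-SVD is given in pseudocode in the paper; the sketch below is a minimal numpy rendering of the MOD alternation (OMP for the sparse coding step, a pseudo-inverse for the closed-form dictionary update), run on synthetic data generated from a hypothetical ground-truth dictionary:

```python
import numpy as np

def omp(Psi, f, k):
    """Greedy sparse coding: select k atoms by orthogonal matching pursuit."""
    r, idx = f.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Psi.T @ r))))
        coef, *_ = np.linalg.lstsq(Psi[:, idx], f, rcond=None)
        r = f - Psi[:, idx] @ coef
    x = np.zeros(Psi.shape[1]); x[idx] = coef
    return x

def mod(F, M, k, iters=20, seed=0):
    """Method of Optimal Directions: alternate sparse coding / dictionary update."""
    rng = np.random.default_rng(seed)
    Psi = rng.standard_normal((F.shape[0], M))
    Psi /= np.linalg.norm(Psi, axis=0)
    for _ in range(iters):
        X = np.column_stack([omp(Psi, f, k) for f in F.T])  # sparse coding step
        Psi = F @ np.linalg.pinv(X)                         # closed-form LS update
        Psi /= np.linalg.norm(Psi, axis=0) + 1e-12          # renormalize atoms
    return Psi

# Synthetic training data from a hypothetical ground-truth dictionary:
rng = np.random.default_rng(1)
D0 = rng.standard_normal((16, 32)); D0 /= np.linalg.norm(D0, axis=0)
X0 = np.zeros((32, 400))
for j in range(400):
    X0[rng.choice(32, 3, replace=False), j] = rng.standard_normal(3)
Psi = mod(D0 @ X0, M=32, k=3)
```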
There have been many studies on structured dictionary learning methods, which add structural information between dictionary elements during learning in order to obtain structured sparse representations of signals. Training a dictionary that is a concatenation of unitary matrices was the first attempt to learn a structured overcomplete dictionary. This structure ensures that the trained dictionary is a tight frame and reduces the computational complexity of dictionary learning.
The algorithm assumes that a dictionary concatenated from $L$ unitary matrices, $\Psi = [\Psi_1, \dots, \Psi_L]$, performs the sparse representation effectively; in the dictionary update step, the $L$ unitary matrices are updated alternately and iteratively. Because of the relatively strict model used, in practice this method cannot represent very flexible structures. The double sparsity dictionary learning method learns a dictionary that itself has a sparse model under a known dictionary: under this structure, each atom of the trained dictionary is represented by a sparse combination of atoms of the given dictionary. On the one hand the method constructs the dictionary adaptively, and on the other hand it effectively improves the efficiency of dictionary learning. The image signature dictionary (ISD) is a shift-invariant dictionary whose atoms are sub-blocks of an image; the method describes invariance in block form, requires few training samples, and its learning process converges quickly. Under this structure it is possible to train dictionaries with atoms of different sizes and to find the optimal blocks for a given sample set.

Another line of work learns dictionaries for block sparse representation, treating the block structure of the dictionary as an unknown prior and inferring it from the data while adjusting the dictionary. The proposed BK-SVD (Block K-SVD) algorithm iteratively alternates updates of the block structure and of the dictionary: in the block structure update step, atoms are progressively merged according to the similarity of the sample subsets they represent; in the dictionary update step, a generalized form of the K-SVD algorithm obtains the sparse dictionary by updating the dictionary atoms in turn. When the block size is 1, BK-SVD reduces to K-SVD. The BK-SVD algorithm does not consider structural information between atoms; Li et al. added a local graph model of the atoms within blocks on the basis of BK-SVD and combined the block sparsity constraint with a graph regularization term to obtain a dictionary learning model, solved by alternately updating the block sparse coding and the dictionary. The resulting dictionary not only guarantees block sparsity but also maintains the local geometry between atoms, and can effectively reduce the correlation between dictionary blocks.

In addition, Jenatton et al. proposed a hierarchical dictionary learning method based on tree-structured sparse regularization, which adds to dictionary learning a tree-structured hierarchical model reflecting the correlations between dictionary elements; a primal-dual method computes the proximal operator of the tree-structured sparse regularization, and the tree-structured sparse decomposition of the signal is solved by an accelerated gradient method. In recent years, domestic scholars have also used and constructed new structured dictionary learning methods for problems such as image denoising and inpainting, for example sparse signal representation based on tree-structured redundant dictionaries, dictionary learning based on local similarity clustering of image blocks, dictionary learning based on non-local joint sparse approximation, and adaptive multi-component sparse representation of image structures.
3.3 Structured signal reconstruction

3.3.1 Traditional CS reconstruction based on the standard sparse prior

Under the constraint of the sparse model, the traditional CS reconstruction problem can be expressed as the following $\ell_0$-norm non-convex optimization problem (14):
$$\min_x \ \|x\|_0 \quad \text{s.t.} \quad y = \Phi x.$$
The most primitive method of solving this $\ell_0$-norm optimization problem is to search for the sparsest solution vector consistent with the linear measurements. For an $N$-dimensional $k$-sparse signal, an exhaustive search over all $\binom{N}{k}$ possible supports is needed, making the problem NP-hard. For this reason, many feasible algorithms have been proposed to replace $\ell_0$-norm optimization.
A common approach is to replace the $\ell_0$ norm with the $\ell_1$ norm, transforming the problem into $\ell_1$-norm minimization (a convex optimization problem). Theoretical research shows that under certain conditions the $\ell_1$-minimization problem is equivalent to the $\ell_0$ problem, so convex $\ell_1$ optimization also yields the sparsest solution. Methods for solving the problem include convex optimization algorithms such as basis pursuit (BP) and greedy pursuit algorithms such as matching pursuit (MP) and orthogonal matching pursuit (OMP). Based on a multi-level sparse prior hypothesis, a fast Bayesian CS (BCS) algorithm can be obtained via sparse Bayesian learning (SBL). A detailed review of traditional CS reconstruction algorithms can be found in the literature.
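As a concrete instance of the $\ell_1$ relaxation, basis pursuit can be posed as a linear program by splitting $x$ into positive and negative parts; the following sketch uses scipy's generic LP solver with illustrative sizes (recovery succeeds only when $M$ is sufficiently large relative to $k \log(N/k)$):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
N, M, k = 60, 30, 4
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = Phi @ x0

# Basis pursuit as an LP: write x = u - v with u, v >= 0; then
# min ||x||_1  s.t.  Phi x = y  becomes  min 1'(u+v)  s.t.  Phi(u-v) = y.
c = np.ones(2 * N)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print(np.linalg.norm(x_hat - x0))   # near zero when M is large enough
```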
The feasible solution space of the traditional CS inverse problem based on the standard sparse model grows exponentially with the signal dimension, so there is great freedom in the choice of feasible solutions. In practical applications, this excessive freedom usually leads to instability of the algorithms and prevents accurate estimates. To overcome this problem, structured CS introduces structural models of the signal and uses them as prior information on the feasible solutions of the CS inverse problem to constrain the feasible solution space. Compared with traditional CS, structured CS effectively reduces the number of compressive measurements, improves reconstruction quality, and extends compressed sensing from finite-dimensional signals to the processing of infinite-dimensional signals. In the following, we introduce structured reconstruction based on the MMV model for finite-dimensional signals, on the union of subspaces model, and on prior regularization methods.
3.3.2 Structured CS reconstruction based on the MMV model

The single measurement vector (SMV) optimization problem (14) can be generalized, using matrix norms, to the following MMV sparse recovery problem:
$$\min_X \ \|X\|_{\mathrm{row},0} \quad \text{s.t.} \quad Y = \Phi X,$$
where the row-$\ell_0$ norm counts the non-zero rows of $X$ and the columns of $X$ share a common support. Most MMV algorithms assume that the elements of the coefficient matrix are independent and identically distributed, an assumption that is inappropriate in many practical scenarios; for example, at high sampling rates the amplitudes of successive samples of a signal source are strongly correlated. The algorithms proposed by Zhang et al. and Cho et al. learn this spatio-temporal structure during sparse recovery by building an autoregressive (AR) model of the signal source; although they outperform MMV algorithms that ignore the spatio-temporal structure, their low efficiency limits their application. Zhang et al. then proposed the temporally correlated multiple sparse Bayesian learning (TMSBL) algorithm, based on the spatio-temporal correlation structure of the signal sources, which improves both the quality and the efficiency of the joint sparse reconstruction of multiple signals. In addition, Zhang et al. improved the iterative reweighted algorithm M-FOCUSS and proposed the iterative reweighted algorithm tMFOCUSS based on the spatio-temporal correlation structure of the signal source. Wu et al. designed a multivariate CS sampling method based on multi-scale CS that transforms the CS reconstruction of wavelet-domain images into an MMV problem: the clustering of image wavelet coefficients gives the coefficients in a neighborhood of the same scale significant statistical correlation and spatial correlation structure. In a Bayesian framework, they used a multivariate scale mixture model to model the correlation structure within the wavelet coefficient scales and, combining it with CS reconstruction, proposed a fast multivariate pursuit algorithm (MPA) for solving the MMV problem.
A more detailed review of CS reconstruction based on the MMV model can be found in the literature.
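A representative greedy MMV solver is simultaneous OMP (SOMP), which ranks atoms by their total correlation with all residual columns; this minimal sketch (our notation, not taken from the reviewed papers) recovers a jointly sparse coefficient matrix:

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous OMP: recover jointly sparse columns of X from Y = Phi X."""
    R, idx = Y.copy(), []
    for _ in range(k):
        # Rank atoms by total correlation across all measurement vectors.
        scores = np.linalg.norm(Phi.T @ R, axis=1)
        scores[idx] = 0
        idx.append(int(np.argmax(scores)))
        X_sub, *_ = np.linalg.lstsq(Phi[:, idx], Y, rcond=None)
        R = Y - Phi[:, idx] @ X_sub
    X = np.zeros((Phi.shape[1], Y.shape[1]))
    X[idx, :] = X_sub
    return X

rng = np.random.default_rng(7)
N, M, k, J = 80, 30, 5, 6                 # J signals sharing one support
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
support = rng.choice(N, k, replace=False)
X0 = np.zeros((N, J)); X0[support, :] = rng.standard_normal((k, J))
X_hat = somp(Phi, Phi @ X0, k)
print(np.linalg.norm(X_hat - X0))         # small: the joint support is recovered
```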
3.3.3 Structured CS reconstruction based on the union of subspaces model

Tree structure models have recently been used in some reconstruction algorithms, achieving better performance than traditional CS reconstruction. The model-based CoSaMP and tree matching pursuit (TMP) algorithms proposed by Baraniuk et al. and the tree-based OMP (TOMP) algorithm of La et al. extend traditional greedy algorithms with a tree structure model. Compared with standard greedy algorithms, these algorithms use the tree structure to determine the signal support, reduce the search range of the algorithm, and effectively improve the sparsity of the reconstructed signal. Lian Qiusheng et al. improved these algorithms, proposing an alternating projection onto convex sets algorithm based on the dual-tree wavelet hidden Markov tree (HMT) model and an iterative hard thresholding algorithm based on the wavelet tree structure model. The tree-structured wavelet CS (TSW-CS) algorithm based on the wavelet HMT model, proposed by He et al., uses the wavelet HMT structure to obtain probabilistic estimates of the image wavelet coefficients through Bayesian learning. The HMT-based weighted iterative reweighted $\ell_1$ (HMT+IRWL1) algorithm proposed by Duarte et al. uses the HMT model of the wavelet coefficients to construct a weighting scheme, which effectively increases the sparsity of the reconstruction coefficients and improves reconstruction accuracy. Zhao Yijiu et al. improved the HMT+IRWL1 algorithm using a 4-state wavelet HMT model. In addition to these algorithms, there are belief propagation and message passing algorithms that use Markov chains, Markov fields, or Markov trees as structured probabilistic priors on the signal, as well as multi-task CS learning problems based on Bayesian multi-level models. In statistics, for the case where there is group or block structure correlation between sparse coefficients, Yuan et al. generalized the Lasso algorithm to the Group Lasso; Jacob et al. and Jenatton et al. extended the Group Lasso to situations with more complicated sparse regularization by adding other types of structural sparsity to the model. Eldar et al. treated the reconstruction of block sparse signals as a mixed $\ell_2/\ell_1$-norm optimization problem solved by convex optimization; the proposed algorithm can be regarded as the generalization of the BP algorithm to block sparse signal reconstruction. Eldar et al. also extended the MP and OMP algorithms to the block matching pursuit (Block-MP) and block orthogonal matching pursuit (Block-OMP) algorithms (a sketch follows below). In addition, Fu Ning et al. proposed a block sparsity adaptive iterative algorithm and a subspace-based CS reconstruction algorithm for block sparse signals.
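To make the block-greedy idea concrete, here is a minimal Block-OMP-style sketch in the spirit of the Eldar et al. algorithm described above: whole blocks of atoms are selected by the energy of their correlation with the residual. The block layout and sizes are illustrative assumptions of ours:

```python
import numpy as np

def block_omp(Phi, y, blocks, k_blocks):
    """Block-OMP: select whole blocks of atoms instead of single atoms."""
    r, chosen = y.copy(), []
    for _ in range(k_blocks):
        # Score each unchosen block by the energy of its correlation with r.
        scores = [np.linalg.norm(Phi[:, b].T @ r) if i not in chosen else -1
                  for i, b in enumerate(blocks)]
        chosen.append(int(np.argmax(scores)))
        cols = np.concatenate([blocks[i] for i in chosen])
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        r = y - Phi[:, cols] @ coef
    x = np.zeros(Phi.shape[1]); x[cols] = coef
    return x

rng = np.random.default_rng(8)
N, M, d = 100, 40, 5                       # 20 blocks of size d = 5
blocks = [np.arange(i, i + d) for i in range(0, N, d)]
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x0 = np.zeros(N)
for i in rng.choice(len(blocks), 2, replace=False):   # 2 active blocks
    x0[blocks[i]] = rng.standard_normal(d)
x_hat = block_omp(Phi, Phi @ x0, blocks, 2)
print(np.linalg.norm(x_hat - x0))          # small when the block RIP holds
```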
3.3.4 Structured CS reconstruction based on prior regularization

In addition to the above two types of reconstruction algorithms that use signal structure models, there is a class of structured CS reconstruction algorithms based on prior regularization. These methods are mostly used for image reconstruction. The structural priors used are derived from the image itself, for example the edges and textures of the image, the neighborhood structure of the image pixels, and the non-local similarity of image sub-blocks; iterative methods are often used to adaptively learn the parameters of the structured prior regularization model while simultaneously recovering the image.
Wu et al. added the edge information of the image to the MMV sparse reconstruction process and proposed an edge-guided MPA algorithm, which obtains high-quality reconstructions for images with strong edges. Wu et al. also proposed a reconstruction algorithm based on an autoregressive model between pixels in the spatial domain, which significantly improves the recovery of details such as image edges and textures.
On this basis, they further improved the performance of the autoregressive-model-based image reconstruction algorithm by adding the non-local similarity of the image to the model. Peyré et al., Zhang et al., and Chen Shuzhen et al. proposed image reconstruction algorithms based on iterative non-local regularization, which alternate between image reconstruction and learning the optimal non-local graph regularization matched to the image structure, and can well reconstruct the edges and textures of natural images. Peyré et al. also proposed best basis compressed sensing, which learns a tree-structured dictionary to obtain the optimal orthogonal basis, thereby obtaining an image-adaptive regularization prior model and adaptive reconstruction. Duarte-Carvajalino et al. proposed an image restoration algorithm that simultaneously learns the adaptive prior structure and the observation matrix. Dong et al. proposed an adaptive sparse domain selection and adaptive regularization dictionary learning algorithm for image restoration: the algorithm clusters the image blocks and uses PCA to learn a sub-dictionary for each cluster, and the constructed dictionary represents the structures of the image well. Zhou et al. proposed a nonparametric multi-level Bayesian dictionary learning method for image restoration; under this model no prior knowledge of the training samples is needed, so sparse representations of the image under a low-dimensional subset of dictionary elements can be obtained adaptively during learning, and structural models of the image, such as cluster structure and spatial structure, can easily be combined with the multi-level Bayesian model in the form of stochastic processes to improve CS reconstruction performance.
4 Summary and prospects

This paper has elaborated the basic models and key techniques involved in structured compressed sensing from the three basic aspects of compressed sensing theory and summarized the latest research results on structured compressed sensing. The introduction of more complex structural models has greatly advanced the application of compressed sensing theory in practice, and the new structured compressed sensing framework extends the types of signals that can be handled. Although structured compressed sensing has been widely studied and many achievements have been made, many problems remain to be solved.
Learning of adaptive observation matrices. Different observation methods have a significant impact on the number of measurements required for compressed sensing reconstruction and on the recovery performance of the algorithms. The random observations used by traditional compressed sensing and the structured observations determined by the sensor's sensing mode in structured compressed sensing usually have a fixed form and cannot adapt to the internal structure of complex signals. For example, in distributed and multi-task compressed sensing of array signals and video images, each signal source or image frame is usually sampled independently using a fixed random observation, without simultaneously considering the correlations between sources or between image frames. At present, the learning and design of adaptive observations is limited to preliminary theoretical research, with simulation experiments verifying its effectiveness in reducing the measurement rate and improving reconstruction quality. Therefore, designing optimal adaptive structured observation matrices and simple sampling implementations for different applications is an important means of expanding the scope of compressed sensing applications, and deserves further research.
Compressed sensing reconstruction based on kernel methods. It is well known that compressed sensing can recover high-dimensional signals from low-dimensional observations, at the cost of a nonlinear optimization recovery process.
In recent years, some scholars have sought closed-form solutions in order to build more practical and efficient compressed sensing methods. It has been pointed out that, by selecting an appropriate kernel mapping and forming the sparse representation of the signal in the feature space, a closed-form solution can be obtained by the least squares method; this is called kernel compressed sensing. Kernel compressed sensing is essentially compressed sensing under a nonlinear sparse representation: it not only avoids the iterative nonlinear optimization process but also recovers the signal from fewer observations than linear sparse representations. Studying the construction and implementation of structured models under this theory is also a future direction for structured compressed sensing.
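The paper gives no formulas for kernel compressed sensing; purely as an illustration of the "closed-form least squares via a kernel mapping" idea mentioned above, the sketch below fits a kernel ridge (regularized least squares) estimator, whose solution is analytical. It is a generic stand-in under our own assumptions (RBF kernel, toy data), not the kernel CS method itself:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy setup: estimate a nonlinear map from few samples, in closed form.
rng = np.random.default_rng(9)
T = rng.standard_normal((30, 2))                  # low-dimensional observations
f = np.sin(T[:, 0]) + 0.5 * np.cos(T[:, 1])       # quantity to recover

K = rbf(T, T)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(T)), f)   # analytical LS solution

T_new = rng.standard_normal((5, 2))
f_hat = rbf(T_new, T) @ alpha                     # closed-form prediction
```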
Non-convex optimization in structured compressed sensing reconstruction. The original problems of both traditional and structured compressed sensing reconstruction are non-convex optimization problems under the $\ell_0$ norm, which are NP-hard. Greedy algorithms represented by matching pursuit and orthogonal matching pursuit, and thresholding algorithms represented by iterative hard thresholding, can be regarded as direct methods for solving this problem. However, neither greedy nor thresholding algorithms can guarantee convergence to the globally optimal solution. At present, scholars have begun to use natural computing methods to solve the non-convex problems of compressed sensing reconstruction.

Chinese-language references cited in the text:
Zhao Wei, Ma Ronghua, Xue Chaoqian, Li Hengjian. Sparse signal decomposition based on orthogonal matching pursuit with a tree-structured redundant dictionary. Journal of Yangzhou University (Natural Science Edition), 2011, 14(4): 52-55, 82
Xu Jian, Chang Zhiguo. Clustering-based adaptive image sparse representation algorithm and its applications. 光...
Hu Zhengping, Liu Wen, Xu Chengqian. Image inpainting based on a global sparse representation model with classified learned dictionaries. Mathematics in Practice and Theory, 2011, ...
Li Min, Cheng Jian, Li Xiaowen, Le Xiang. Image inpainting with non-locally learned dictionaries. Journal of Electronics & Information Technology, ...
Sun Yubao, Xiao Liang, Wei Zhihui, Liu Qingshan. Structure-adaptive subspace matching pursuit algorithm for sparse representation of images. Chinese Journal of Computers, 2012, ...
Yang Hairong, Zhang Cheng, Ding Dawei, Wei Sui. Compressed sensing theory and reconstruction algorithms. Acta Electronica Sinica, 2011, 39(1): 142-148
Wang Fasong, Zhang Linrang, Zhou Yu. Multiple measurement vector model and algorithm analysis for compressed sensing. Signal Processing, 2012, 28(6): 785-792
Lian Qiusheng, Wang Yan. Image compressed sensing based on the dual-tree wavelet universal hidden Markov tree model. ...
Lian Qiusheng, Xiao Ying. Image compressed sensing algorithm based on wavelet tree structure and iterative shrinkage. ...
Zhao Yijiu, Wang Houjun, Dai Zhijian. Wavelet-domain compressed sampling signal reconstruction method based on the hidden Markov tree model. Journal of Electronic Measurement and Instrument, 2010, ...
Fu Ning, Cao Liran, Peng Xiyuan. Subspace-based compressed sensing reconstruction algorithm for block sparse signals. ...
Fu Ning, Qiao Liyan, Cao Liran. Block-sparsity adaptive iterative algorithm for compressed sensing. Acta Electronica Sinica, 2011, 39(3A): 75-79
Chen Shuzhen, Li Guangyao, Lian Qiusheng. Image compressed sensing based on non-local similarity and alternating iterative optimization. Signal Processing, 2012, ...
Yang Li. Compressed sensing reconstruction based on ridgelet redundant dictionaries and genetic evolution. Xidian University, China, 2012
Ma Hongmei. Compressed sensing reconstruction based on curvelet redundant dictionaries and immune clonal optimization. Xidian University, China, 2012
Gao Guodong. Compressed sensing image reconstruction based on alternating learning and immune optimization. Xidian University, China, 2012

Liu Fang. Professor at the School of Computer Science and Technology, Xidian University. She received her M.S. degree from the School of Computer Science and Technology, Xidian University in 1995. Her research interests include signal and image processing, learning theory and algorithms, and pattern recognition.
Wu Jiao. Lecturer at the College of Sciences, China Jiliang University. She received her Ph.D. degree from the School of Computer Science and Technology, Xidian University in 2012. Her research interests include image processing, machine learning, and statistical learning theory and algorithms. Corresponding author of this paper.

Yang Shuyuan. Professor at the School of Electronic Engineering, Xidian University. She received her Ph.D. degree from the School of Electronic Engineering, Xidian University in 2005. Her research interests include machine learning, multiscale analysis, and compressive sampling.
Jiao Licheng. Distinguished professor at the School of Electronic Engineering, Xidian University. He received his Ph.D. degree from Xi'an Jiaotong University in 1990. His research interests include signal and image processing, natural computing, and intelligent information processing.
