The success of many machine learning algorithms depends heavily on data representation. In this paper, we present an l2,1-norm constrained canonical correlation analysis (CCA) model, termed L2,1-CCA, for discovering a compact and discriminative representation of data associated with multiple views. To exploit the complementary and coherent information across the views, the l2,1-norm is employed both to constrain the canonical loadings and to measure the canonical correlation loss term. On the one hand, this equips the canonical loadings with a variable-selection capacity that improves the interpretability of the learned canonical variables; on the other hand, it keeps the learned canonical common representation highly consistent with the most canonical variables from each view of the data. The proposed L2,1-CCA is also insensitive to noise (outliers) to some degree. To solve the optimization problem, we develop an efficient alternating optimization algorithm and analyze its convergence both theoretically and experimentally. Extensive experiments on several real-world datasets demonstrate that L2,1-CCA achieves competitive or better performance than representative multiview representation learning approaches.
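The variable selection the abstract refers to stems from the structure of the l2,1-norm, which sums the l2 norms of the rows of the loading matrix; penalizing it drives entire rows to zero at once. A minimal sketch (the function name and toy matrix are illustrative, not from the paper):

```python
import math

def l21_norm(W):
    """l2,1 norm: sum over rows of the row-wise l2 norm.

    An all-zero row contributes nothing, which is why an l2,1
    penalty on a loading matrix zeroes out whole rows, i.e. it
    performs row-sparse variable selection."""
    return sum(math.sqrt(sum(x * x for x in row)) for row in W)

W = [
    [0.0, 0.0, 0.0],   # an unselected variable: the whole row is zero
    [3.0, 4.0, 0.0],   # contributes sqrt(9 + 16) = 5
]
print(l21_norm(W))  # -> 5.0
```

Contrast with the plain l1 norm, which penalizes entries individually and yields scattered rather than row-structured sparsity.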
This article was published in the following journal.
Name: IEEE Transactions on Cybernetics
Multiview datasets are the norm in bioinformatics, often under the label multi-omics. Multiview data is gathered from several experiments, measurements or feature sets available for the same subjects....
In this paper, we propose a general framework for incomplete multiview clustering. The proposed method is the first work that exploits the graph learning and spectral clustering techniques to learn th...
In this paper, we address the multiview nonlinear subspace representation problem. Traditional multiview subspace learning methods assume that the heterogeneous features of the data usually lie within...
Detecting coherent groups is fundamentally important for crowd behavior analysis. In the past few decades, plenty of works have been conducted on this topic, but most of them have limitations due to t...
This study introduces and evaluates a novel target identification method, Latent common source extraction (LCSE), that uses subject-specific training data for the enhancement of steady-state visual ev...
Encouraging individuals to eat vegetables is difficult. However, recent evidence suggests that using social-based information might help. For instance, it has been shown that if people thi...
Learning that most people engage in an activity can be a powerful motivator to adoption. But are there instances in which people can similarly find motivation from learning that only a min...
The purpose of the study is to develop a questionnaire on community occupational participation for persons with schizophrenia living in the community. In the International Classification o...
The DELPhi system is a software device that is used for the noninvasive evaluation of brain plasticity and connectivity. The DELPhi software uses EEG and TMS devices as accessories. Standa...
Systematic gathering of data for a particular purpose from various sources, including questionnaires, interviews, observation, existing records, and electronic devices. The process is usually preliminary to statistical analysis of the data.
Analysis based on the mathematical function first formulated by Jean-Baptiste-Joseph Fourier in 1807. The function, known as the Fourier transform, describes the sinusoidal pattern of any fluctuating pattern in the physical world in terms of its amplitude and its phase. It has broad applications in biomedicine, e.g., analysis of the x-ray crystallography data pivotal in identifying the double helical nature of DNA and in analysis of other molecules, including viruses, and the modified back-projection algorithm universally used in computerized tomography imaging, etc. (From Segen, The Dictionary of Modern Medicine, 1992)
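As a concrete illustration of the definition above, a naive discrete Fourier transform can recover the amplitude and phase of a sinusoidal component. This is a plain-Python sketch (O(n^2), for illustration only; the toy signal and names are not from the source):

```python
import cmath
import math

def dft(signal):
    """Naive discrete Fourier transform; O(n^2), illustration only."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A pure cosine: one cycle over 8 samples, amplitude 1, zero phase offset.
n = 8
x = [math.cos(2 * math.pi * t / n) for t in range(n)]
X = dft(x)

# Frequency bin k=1 carries the component: amplitude = 2*|X[k]|/n,
# phase = arg(X[k]) (here approximately zero for a cosine).
amp = 2 * abs(X[1]) / n
print(round(amp, 6))  # -> 1.0
```

Production code would use a fast Fourier transform (FFT), which computes the same quantity in O(n log n).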
A class of statistical methods applicable to a large set of probability distributions used to test for correlation, location, independence, etc. In most nonparametric statistical tests, the original scores or observations are replaced by another variable containing less information. An important class of nonparametric tests employs the ordinal properties of the data. Another class of tests uses information about whether an observation is above or below some fixed value such as the median, and a third class is based on the frequency of the occurrence of runs in the data. (From McGraw-Hill Dictionary of Scientific and Technical Terms, 4th ed, p1284; Corsini, Concise Encyclopedia of Psychology, 1987, p764-5)
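To illustrate the rank-based class of tests mentioned above, here is a plain-Python sketch of Spearman's rank correlation, which replaces the raw observations with their ranks before computing a correlation (names and toy data are illustrative):

```python
def ranks(values):
    """Rank observations (1 = smallest); ties receive the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied rank positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation applied to the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A monotone but nonlinear relationship still scores rho = 1,
# because only the ordering of the observations matters.
print(round(spearman([1, 2, 3, 4, 5], [1, 4, 9, 16, 25]), 6))  # -> 1.0
```

This is the sense in which the replacement variable "contains less information": the magnitudes are discarded and only the ordinal structure is kept.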
Signal and data processing method that uses decomposition of wavelets to approximate, estimate, or compress signals with finite time and frequency domains. It represents a signal or data in terms of a fast decaying wavelet series from the original prototype wavelet, called the mother wavelet. This mathematical algorithm has been adopted widely in biomedical disciplines for data and signal processing in noise removal and audio/image compression (e.g., EEG and MRI).
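One level of the decomposition described above can be illustrated with the Haar wavelet, the simplest mother wavelet: pairs of samples are split into a coarse approximation and a detail coefficient. A plain-Python sketch with toy data, not a production implementation:

```python
import math

def haar_step(signal):
    """One level of the orthonormal Haar wavelet transform.

    Each pair of samples maps to an approximation (scaled sum)
    and a detail coefficient (scaled difference)."""
    s = 1 / math.sqrt(2)
    half = len(signal) // 2
    approx = [s * (signal[2 * i] + signal[2 * i + 1]) for i in range(half)]
    detail = [s * (signal[2 * i] - signal[2 * i + 1]) for i in range(half)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert one Haar level: the transform loses no information."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [s * (a + d), s * (a - d)]
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_step(x)
# Perfect reconstruction up to floating-point rounding.
print(max(abs(u - v) for u, v in zip(haar_inverse(a, d), x)) < 1e-9)  # -> True
```

Compression and denoising exploit the fact that, for smooth signals, most detail coefficients are near zero and can be thresholded away.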
The statistical manipulation of hierarchically and non-hierarchically nested data, including clustered data such as a sample of subjects within a group of schools. Prevalent in the social, behavioral, and biomedical sciences; both linear and nonlinear regression models are applied.
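The core idea behind random-intercept multilevel models for such nested data is partial pooling: each group's estimate is shrunk toward the grand mean, with small groups shrunk more. A toy sketch of that shrinkage (the constant k stands in for the variance ratio a real model would estimate; all names and data are illustrative):

```python
def shrunken_group_means(data, k=5.0):
    """Partial pooling: pull each group mean toward the grand mean.

    Weight n/(n + k) means groups with little data are shrunk more,
    mimicking the random-intercept estimates of a multilevel model.
    `k` is an assumed tuning constant, not an estimated variance ratio."""
    all_values = [v for group in data.values() for v in group]
    grand = sum(all_values) / len(all_values)
    out = {}
    for g, values in data.items():
        n = len(values)
        w = n / (n + k)
        out[g] = w * (sum(values) / n) + (1 - w) * grand
    return out

# Toy nested data: test scores of students within schools.
schools = {
    "A": [70.0, 72.0, 71.0, 69.0, 73.0, 70.0, 71.0, 74.0],  # large group
    "B": [90.0],                                            # single student
}
print(shrunken_group_means(schools))
```

School B's single observation is pulled strongly toward the grand mean, while school A's well-supported mean barely moves; fitting the actual model would be done with a mixed-effects package rather than a fixed k.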
Complementary and Alternative Medicine
Alternative medicine comprises whole medical systems that do not fit within conventional medicine, as they have completely different philosophies and ideas about the causes of disease, methods of diagnosis, and approaches to treatment. Although often overlapping, co...