Multi-task feature selection in microarray data by binary integer programming

A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm that selects the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was also extended to multi-task microarray classification problems. We compared the single-task version of the proposed algorithm with 9 existing feature selection methods on 4 benchmark microarray datasets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than the single-task feature selection methods.


Background
Microarray technology can simultaneously measure the expression levels of thousands of genes for a given biological sample, which is classified into one of several categories (e.g., cancer vs. control tissue). Each sample is represented by a feature vector of gene expressions obtained from a microarray. Using a set of microarray samples with known class labels, the goal is to learn a classifier able to classify a new tissue sample based on its microarray measurements. A typical microarray classification dataset contains a limited number of labeled examples, ranging from only a few to several hundred. Building a predictive model from such small-sample, high-dimensional data is a challenging problem that has received significant attention in the machine learning and bioinformatics communities. To reduce the risk of over-fitting, a typical strategy is to select a small number of features (i.e., genes) before learning a classification model. As such, feature selection [1,2] has become an essential technique in microarray classification.
There are several reasons for feature selection in microarray data beyond improving the classifier's generalization ability. First, the selected genes might be of interest to domain scientists seeking to identify disease biomarkers. Second, building a classifier from a small number of features could result in an easily interpretable model that could give important clues to biologists. Depending on how the feature selection process is combined with the model learning process, feature selection techniques can be organized into three categories: (1) filter methods [3] are independent of the learning algorithm; (2) wrapper methods [4] are coupled with the learning algorithm through heuristics such as forward selection and backward elimination; (3) embedded methods [5,6] integrate feature selection as part of classifier training. Both the wrapper and the embedded methods effectively introduce hyper-parameters that require computationally costly nested cross-validation and increase the likelihood of over-fitting. Feature filter methods are very popular because they are typically conceptually simple, computationally efficient, and robust to over-fitting. These properties also explain why filter methods are more widely used than the other two approaches in microarray data classification.
Traditional filter methods rank the features based on their correlation with the class label and then select the top-ranked features. The correlation can be measured by statistical tests (e.g., the t-test) or by information-theoretic criteria such as mutual information. Filter methods easily scale up to high-dimensional data and can be used in conjunction with any supervised learning algorithm. However, because traditional filter methods assess each feature independently, highly correlated features tend to have similar rankings and tend to be selected jointly. Using redundant features could result in low classification accuracy. As a result, one common improvement to filter methods is to reduce the redundancy between selected features. For example, minimal-redundancy-maximal-relevance (mRMR), proposed by [7], selects the feature set with both maximal relevance to the target class and minimal redundancy within the selected feature set. Because of the high computational cost of considering all possible feature sets, the mRMR algorithm selects features greedily, minimizing their redundancy with features chosen in previous steps and maximizing their relevance to the target class.
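To make the greedy strategy concrete, the following is a minimal Python sketch of the mRMR idea (our own simplified illustration, not the reference implementation; the function name `greedy_mrmr` and its inputs are hypothetical):

```python
import numpy as np

def greedy_mrmr(relevance, similarity, m):
    """Greedy max-relevance min-redundancy selection (simplified sketch).

    relevance:  length-M array, relevance of each feature to the class label
    similarity: M x M array, pairwise feature similarity (redundancy)
    m:          number of features to select
    """
    selected = [int(np.argmax(relevance))]   # start with the most relevant feature
    while len(selected) < m:
        best_j, best_score = None, -np.inf
        for j in range(len(relevance)):
            if j in selected:
                continue
            # relevance minus average redundancy with already-selected features
            score = relevance[j] - np.mean([similarity[j, s] for s in selected])
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected
```

For instance, with two highly correlated relevant features, the greedy step skips the redundant one in favor of a less correlated feature, even if the latter's individual relevance is lower.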
A common critique of popular feature selection filters is that they are typically based on relatively simple heuristics. To address this concern, recent research has produced more principled formulations of feature filters. For example, the algorithms proposed in [8] and [9] attempt to select the feature subset with maximal relevance and minimal redundancy by solving a constrained quadratic optimization problem (QP). The objective used by [8] is a combination of a quadratic term and a linear term: the redundancy between feature pairs is measured by the quadratic term, and the relevance between features and the class label is measured by the linear term. The features are ranked based on a weight vector obtained by solving the QP. The main limitation of this method is that the relevance between a feature and the class label is measured by either Pearson correlation or mutual information. However, Pearson correlation assumes normally distributed measurements, which might not be appropriate for measuring the correlation between numerical features and a binary target, while mutual information requires discrete variables and is sensitive to discretization. The objective used by [9] contains only one quadratic term, which consists of two parts: one measures feature relevance using mutual information between features and the class label, and the other measures feature redundancy using mutual information between each feature pair. However, the square matrix in the proposed quadratic term is not positive semi-definite. Thus, the resulting optimization problem is not convex and could result in poor local optima.
In this paper, we propose a novel feature filter method that finds the feature subset maximizing the inter-class separability and intra-class tightness while minimizing the pairwise correlations between selected features. We formulate the problem as a quadratic program with binary integer constraints. For high-dimensional microarray data, solving this problem directly incurs high time and space costs. Therefore, we relax the binary integer constraints and apply a low-rank approximation to the quadratic term in the objective function. The resulting objective function can be solved efficiently to obtain a small subset of features with maximal relevance and minimal redundancy.
In many real-life microarray classification problems, the size of the available microarray dataset is particularly small (e.g., we might have fewer than 10 labeled high-dimensional examples). In this case, even the most carefully designed feature selection algorithms are bound to underperform. Probably the only remedy is to borrow strength from external microarray datasets. Recent research [10,11] illustrates that multi-task feature selection algorithms, which select informative features jointly across many different microarray classification datasets, can improve classification accuracy. Following this observation, we extend our feature selection algorithm to the multi-task microarray classification setup.
The contributions of this paper can be summarized as follows. (1) We propose a novel gene filter method that obtains a feature subset with maximal discriminative power and minimal redundancy; (2) the globally optimal solution can be found efficiently by relaxing the integer constraints and using a low-rank approximation technique; (3) we extend our feature selection method to the multi-task classification setting; (4) the experimental results show that our algorithms achieve higher accuracy than existing filter feature selection methods in both the single-task and multi-task settings.

Results and discussion
We compared our proposed feature selection algorithm with 9 representative feature selection filters. The first 6 are standard feature selection filters: Pearson Correlation (PC), ChiSquare [3], GINI, Infogain, Kruskal-Wallis test (KW) and Relief [12]. They rank the features based on different criteria that measure the correlation between each feature and the class label. The remaining 3 are state-of-the-art feature selection methods that are able to remove redundant features: mRMR [7], QPFS [8] and SASMIF [9]. The feature similarity for both QPFS and our algorithm was measured by Pearson correlation. For a fair comparison, we used the top m ranked features for the SASMIF method. To balance the effect of feature relevance and feature redundancy, the parameter λ in (9) was set to the value suggested in [13]. We denote the single-task version of our algorithm as ST-BIP and the multi-task version as MT-BIP.
Given the selected features, we used LIBLINEAR [14] to train a linear SVM model. The linear SVM was chosen because previous studies [5] showed that SVM classifiers can be very accurate on microarray data. The regularization parameter C of LIBLINEAR was chosen from {10^{-3}, …, 10^{3}}. For the experiments in the single-task scenario, we used nested 5-fold cross-validation to select the optimal regularization parameter. For the experiments in the multi-task scenario, it was too time consuming to use nested cross-validation to select the regularization parameter, so we simply fixed it to 1.

Single task feature selection
In this section, we evaluate our proposed feature selection algorithm for single-task learning using four benchmark microarray gene expression cancer datasets: (1) the Colon dataset [15], containing 62 samples, 40 tumor and 22 normal; (2) the Lung dataset [16], containing 86 samples coming from 24 patients that died and 62 that survived; (3) the Diffuse Large B-cell Lymphoma (DLBCL) dataset [17], containing 77 samples, 58 coming from DLBCL patients and 19 from B-cell lymphoma patients; and (4) the Myeloma dataset [18], containing 173 samples, 137 coming from patients with bone lytic lesions and 36 from control patients. We summarize the characteristics of these datasets in Table 1.
For each microarray dataset, we randomly selected 20 positive and 20 negative examples (15 positive and 15 negative for the DLBCL dataset) as the training set and used the rest as the test set. Due to the class imbalance in the test sets, we used AUC, the area under the Receiver Operating Characteristic (ROC) curve, to evaluate the performance. The average AUC over 10 repetitions of the experiments on different random splits into training and test sets is reported in Table 2. We compared the AUC of the different feature selection algorithms for m = 20, 50, 100, 200, 1000. For each dataset, the best AUC score among all methods is emphasized in bold. As shown in Table 2, our proposed method achieved the highest accuracy on the Colon and DLBCL datasets. On the Myeloma dataset, it had the highest accuracy for m = 100 and 1000 and the second highest for m = 20, 50 and 200. On the Lung dataset, our algorithm ranked in the upper half of the competing algorithms. The last column in Table 2 shows the average AUC score across the four datasets, on which our method achieved the highest value. The next two most successful feature selection algorithms were Relief and QPFS. The mRMR had somewhat lower accuracy, comparable to simple filters such as PC, ChiSquare, GINI and InfoGain. SASMIF was considerably less accurate, while KW was the least successful.

Multi-task feature selection
In this section, we evaluate our proposed feature selection algorithm for multi-task learning. We used 8 cancer-related binary microarray classification datasets published in [19], summarized in Table 3. As shown in Table 3, the 8 microarray datasets are very small. The single-task feature selection algorithms are not expected to perform well because such small samples might not carry sufficient information, even for simple feature selection filters. In contrast, our multi-task feature selection algorithm is expected to improve the accuracy by borrowing strength across the multiple microarray datasets.
For each microarray data set, we randomly selected N + = 2, 3, 4, 5 positive and the same number of negative examples as the training data and used the rest as the test data. We show the results for m = 100 in this section. The average AUC across these 8 microarray datasets is shown in Figure 1. The results clearly show the multi-task version of our proposed algorithm was the most successful algorithm overall.
To gain a deeper understanding of why the multi-task feature selection algorithm obtained better overall accuracy than the single-task algorithms, we show the AUC score on each individual microarray dataset for N + = 3 in Table 4. We can see that the single-task version of our feature selection algorithm had the highest overall accuracy among the single-task benchmarks, a result consistent with Table 2. The multi-task version of our algorithm has a higher AUC than its single-task version on 4 datasets, and its average AUC is about 1.5% higher. On the other 4 datasets (Colon, Lung, Pancreas, Renal) we observe negative transfer, where the accuracy drops. How to prevent negative transfer in multi-task feature selection is an interesting topic for our future research.

Gene-annotation enrichment analysis for multi-task microarray datasets
The multi-task experimental results show that the accuracies obtained by MT-BIP are overall better than those of the single-task feature filters. We therefore performed functional annotation of the MT-BIP-selected genes. The MT-BIP filter produces a single selected gene list for all 8 different types of cancer. Given this gene list, the top 10 enriched GO terms were obtained using the DAVID Bioinformatics Resources [20] and are shown in Table 5. In this table, Hits is the number of genes in the selected gene list associated with the specific GO term. The p-value was obtained by Fisher's exact test, which measures the gene enrichment in annotation terms. Given the enriched GO terms, we used the Comparative Toxicogenomics Database (CTD) [21] to check whether there is an association between each GO term and each cancer type. The last column in Table 5 shows the disease associations for each GO term. The datasets are ordered as Bladder (B), Breast (B), Colon (C), Lung (L), Pancreas (P), Prostate (P), Renal (R) and Uterus (U). If a GO term is associated with the given type of cancer, we write down the cancer name; otherwise, we put the symbol # in that position. We can see that the enriched GO terms based on the MT-BIP gene list tend to be associated with many different types of cancer. As shown in Table 5, GO:0005856 (cytoskeleton) and GO:0005886 (plasma membrane) were associated with 7 different cancers, while GO:0030054 (cell junction), GO:0015629 (actin cytoskeleton) and GO:0032403 (protein complex binding) were associated with 6 different cancers.

Conclusion
We proposed a novel feature filter method to select a feature subset with high discriminative power and minimal redundancy. The proposed feature selection method is based on a quadratic optimization problem with binary integer constraints, which can be solved efficiently by relaxing the binary integer constraints and applying a low-rank approximation to the quadratic term in the objective. Furthermore, we extended our feature selection algorithm to multi-task classification problems. The empirical results on a number of microarray datasets show that in the single-task scenario the proposed algorithm results in higher accuracy than existing feature selection methods. The results also suggest that our multi-task feature selection algorithm can further improve microarray classification performance.

Feature selection by binary integer programming
Let us denote the training dataset as D = {(x_i, y_i)}, i = 1, …, N, where x_i is an M-dimensional feature vector for the i-th example, y_i is its class label, and N is the number of training examples. Our objective is to select a feature subset that is strongly predictive of the class label and has low redundancy. We introduce a binary vector w = [w_1, w_2, …, w_M]^T to indicate which features are selected:

w_j = 1 if the j-th feature is selected, and w_j = 0 otherwise.    (1)

The new feature vector for the i-th example after feature selection can then be represented as g_i = x_i ⊙ w, where the symbol ⊙ denotes the element-wise product. Therefore, g_ij = x_ij for w_j = 1 and g_ij = 0 for w_j = 0. Alternatively, g_i can be represented as g_i = W x_i, where W is a diagonal matrix whose diagonal is the vector w.
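As a quick illustration of the indicator representation, the following numpy snippet (with made-up values) checks that the element-wise product x ⊙ w and the diagonal form W x give the same masked feature vector:

```python
import numpy as np

x = np.array([0.8, -1.2, 3.1, 0.4])   # one example, M = 4 features
w = np.array([1, 0, 1, 0])            # binary indicator: keep features 1 and 3

g_elementwise = x * w                 # g = x ⊙ w
g_diagonal = np.diag(w) @ x           # g = W x with W = diag(w)

assert np.allclose(g_elementwise, g_diagonal)
assert np.allclose(g_elementwise, [0.8, 0.0, 3.1, 0.0])
```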
Intuitively, we would like examples from the same class to be close (intra-class tightness) and examples from different classes to be far apart (inter-class separability) in the space defined by the selected features. The squared Euclidean distance between two examples x_i and x_j in the new feature space can be calculated as

d(g_i, g_j) = ||W x_i − W x_j||^2 = Σ_{k=1}^{M} w_k (x_{ik} − x_{jk})^2,    (2)

where the last equality uses w_k^2 = w_k for binary w_k. The inter-class separability of the data can be measured by the sum of pairwise distances between examples with different class labels,

S_b(w) = Σ_{i<j: y_i ≠ y_j} d(g_i, g_j),    (3)

and the intra-class tightness by the sum of pairwise distances between examples with the same class label,

S_w(w) = Σ_{i<j: y_i = y_j} d(g_i, g_j).    (4)

Therefore, the problem of selecting a feature subset that maximizes the inter-class separability and intra-class tightness (i.e., minimizes the within-class distances) can be formulated as

max_w  S_b(w) − S_w(w).    (5)

Objective (5) can be rewritten as

max_w  Σ_{i<j} A_{ij} d(g_i, g_j),    (6)

where the matrix A is defined as

A_{ij} = 1 if y_i ≠ y_j, and A_{ij} = −1 if y_i = y_j.    (7)

In addition to objective (5) or (6), in order to improve the diversity of the selected features, we would like to select a feature subset with minimal redundancy. A feature is redundant if there is another feature highly correlated with it. Let Q denote a symmetric positive semi-definite matrix of size M × M, whose element Q_{ij} represents the similarity between features i and j. Since the measurements of each feature across different samples are approximately normally distributed, it is reasonable to use Pearson correlation to measure the similarity between two features; moreover, the similarity matrix Q is positive semi-definite when Pearson correlation is used. We then define the redundancy of the feature set represented by vector w as the average pairwise similarity w^T Q w / m^2, where m is the number of selected features. Our objective is to minimize the redundancy defined in this way.
The first contribution of this paper is to formulate the feature selection task as a new quadratic programming problem subject to binary integer and linear constraints:

max_w  Σ_{i<j} A_{ij} d(g_i, g_j) − λ/m^2 · w^T Q w
s.t.  w_j ∈ {0, 1}, j = 1, …, M;  Σ_{j=1}^{M} w_j = m.    (8)

The first term in (8), which is a linear term as shown in Proposition 1 below, maximizes the inter-class separability and intra-class tightness of the data; it describes the discriminative power of the selected feature subset. The second, quadratic term is the average pairwise similarity between the selected features, and minimizing it reduces feature redundancy. The parameter λ controls the tradeoff between feature relevance and feature redundancy. Since Q is positive semi-definite, the objective is concave and the resulting optimization problem is convex. The first constraint ensures that the resulting vector w is binary, while the second ensures that exactly m features are selected. The following proposition establishes that the first term of the objective (8) is linear.

Proposition 1. The first term of the objective function (8) can be written as a linear term c^T w, where c is a vector of size M with elements c_i = (X^T L X)_{ii}. Here L is the Laplacian matrix of A, defined as L = D − A, where D is a diagonal degree matrix with D_{ii} = Σ_j A_{ij}; X is the N × M feature matrix, each row of which corresponds to one example; and (X^T L X)_{ii} denotes the i-th diagonal element of the matrix X^T L X.
Proof. Let W denote the diagonal matrix with W_{ii} = w_i. Then

Σ_{i<j} A_{ij} ||W x_i − W x_j||^2
= (1/2) Σ_{i,j} A_{ij} (x_i − x_j)^T W^2 (x_i − x_j)
= Σ_i D_{ii} x_i^T W^2 x_i − Σ_{i,j} A_{ij} x_i^T W^2 x_j
= tr(W^2 X^T (D − A) X) = tr(W^2 X^T L X)
= Σ_i w_i^2 (X^T L X)_{ii} = Σ_i w_i (X^T L X)_{ii} = c^T w,

where the last line uses w_i^2 = w_i for binary w_i. □

Based on Proposition 1, objective (8) can be rewritten as the following constrained quadratic optimization problem:

max_w  c^T w − λ/m^2 · w^T Q w
s.t.  w_j ∈ {0, 1}, j = 1, …, M;  Σ_{j=1}^{M} w_j = m.    (9)

There are two practical obstacles to solving (9): (1) the binary constraints on the variable w, and (2) the feature similarity matrix Q has size M × M, which implies a high computational cost for high-dimensional data. In the next two sections, we first relax the binary constraints and then apply a low-rank approximation to Q. The resulting constrained optimization problem can be solved very efficiently, in time linear in the number of features M.
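Proposition 1 can be checked numerically. The following Python sketch (our own illustration, with randomly generated toy data) builds A and the Laplacian L from the labels, computes c, and verifies that the pairwise sum equals c^T w for a binary w:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 5                         # toy sizes
X = rng.normal(size=(N, M))         # N x M feature matrix
y = rng.integers(0, 2, size=N)      # binary class labels

# A_ij = +1 for pairs with different labels, -1 for same-label pairs
A = np.where(y[:, None] != y[None, :], 1.0, -1.0)
L = np.diag(A.sum(axis=1)) - A      # Laplacian L = D - A

# Proposition 1: the separability term is linear, c_i = (X^T L X)_ii
c = np.diag(X.T @ L @ X)

# numerical check against the pairwise form, for an arbitrary binary w
w = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
pairwise = sum(A[i, j] * np.sum((w * (X[i] - X[j])) ** 2)
               for i in range(N) for j in range(i + 1, N))
assert np.isclose(pairwise, c @ w)
```

Computing c costs only one matrix product per dataset, whereas the pairwise sum is quadratic in N; this is what makes the linear form practical.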
Problem relaxation. Due to the binary constraints on the indicator vector w, problem (9) is difficult to solve [9]. To resolve this, we first relax the binary constraints by allowing each element w_j to lie in the range [0, m]. Then, (9) can be approximated by

max_w  c^T w − λ/m^2 · w^T Q w
s.t.  0 ≤ w_j ≤ m, j = 1, …, M;  Σ_{j=1}^{M} w_j = m.    (10)

Now, (10) is a standard Quadratic Programming (QP) problem, and the optimal solution can be obtained by a general QP solver (e.g., MOSEK [22]).
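The relaxed problem (10) can be sketched in Python as follows, using scipy's general-purpose SLSQP solver as a stand-in for a dedicated QP solver such as MOSEK; the sizes, the random Q, and the final top-m ranking step are our own illustrative choices:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
M, m, lam = 6, 3, 0.5             # toy dimensions; lam is the tradeoff parameter

c = rng.uniform(size=M)           # relevance scores (Proposition 1)
B = rng.normal(size=(M, M))
Q = B @ B.T                       # an arbitrary PSD feature-similarity matrix

def objective(w):
    # negative of the relaxed objective (10): minimize redundancy minus relevance
    return lam / m**2 * (w @ Q @ w) - c @ w

res = minimize(objective, x0=np.full(M, m / M), method="SLSQP",
               bounds=[(0.0, float(m))] * M,
               constraints={"type": "eq", "fun": lambda w: w.sum() - m})

# rank features by the relaxed weights and keep the top m
selected = np.argsort(res.x)[::-1][:m]
```

Because Q is PSD, the minimized objective is convex, so any local solution the solver finds is globally optimal.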
Low-rank approximation. The matrix Q in (10) is of size M × M, which implies high time and space costs for high-dimensional microarray data. Therefore, we avoid this computational bottleneck by using low-rank approximation techniques.
The matrix Q in (10) is symmetric positive semi-definite, so it can be decomposed as Q = U Λ U^T, where U is the matrix of eigenvectors and Λ is a diagonal matrix of the corresponding eigenvalues of Q. By setting α = Λ^{1/2} U^T w, it follows that w = U Λ^{−1/2} α. Therefore, problem (10) can be rewritten as

max_α  c^T U Λ^{−1/2} α − λ/m^2 · α^T α
s.t.  0 ≤ U Λ^{−1/2} α ≤ m;  1^T U Λ^{−1/2} α = m.    (11)

Typically, the rank of Q (let us denote it as k) is much smaller than M, k ≪ M. Therefore, we can replace the full eigenvector and eigenvalue matrices U and Λ by the top k eigenvectors and eigenvalues, resulting in an M × k matrix U_k and a k × k diagonal matrix Λ_k, without losing much information. Therefore, (11) is reformulated as

max_α  c^T U_k Λ_k^{−1/2} α − λ/m^2 · α^T α
s.t.  0 ≤ U_k Λ_k^{−1/2} α ≤ m;  1^T U_k Λ_k^{−1/2} α = m.    (12)

Since α is a vector of length k with k ≪ M, QP (11) is reduced to a new QP in a k-dimensional space with M + 1 constraints. Once the solution α of (12) is obtained, the variable w in the original space can be recovered as w = U_k Λ_k^{−1/2} α. Decomposing the matrix Q requires O(M^3) time, which is expensive for microarray data where M is large. Next we show how to efficiently compute the top k eigenvectors and eigenvalues using the Nyström approximation technique [23]. The Nyström method approximates an M × M symmetric positive semi-definite matrix Q by

Q ≈ E_{Mk} W_{kk}^{−1} E_{Mk}^T,    (13)

where E_{Mk} denotes the sub-matrix of Q created by selecting k of its columns, and W_{kk} is the sub-matrix at the intersection of the selected columns and the corresponding rows. Sampling schemes for the Nyström method include random sampling [23], probabilistic sampling [24], and k-means based sampling [13]. We chose k-means sampling in our experiments because [13] showed that it produces very good low-rank approximations at a relatively low cost. Given (13), we can obtain a low-rank approximation of Q as

Q ≈ G G^T, where G = E_{Mk} W_{kk}^{−1/2}.    (14)

As shown in the following Proposition 2, the top k eigenvectors and eigenvalues can be computed in O(Mk^2) time using the Nyström method, which is much more efficient than the O(M^3) eigen-decomposition of Q.
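The Nyström construction (13)-(14), together with the eigenvector recovery stated in Proposition 2, can be sketched as follows (our own illustration; we use random column sampling instead of the k-means sampling used in the experiments, and a toy low-rank Q for which the approximation is exact):

```python
import numpy as np

rng = np.random.default_rng(2)
M, k = 50, 5
F = rng.normal(size=(M, k))
Q = F @ F.T                       # rank-k PSD similarity matrix (toy stand-in)

# Nystrom: sample k columns (random sampling here; the paper uses k-means)
idx = rng.choice(M, size=k, replace=False)
E = Q[:, idx]                     # M x k sub-matrix of sampled columns
Wkk = Q[np.ix_(idx, idx)]         # k x k intersection block

# Q ~ G G^T with G = E Wkk^{-1/2}
vals, vecs = np.linalg.eigh(Wkk)
Wkk_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
G = E @ Wkk_inv_sqrt

# Proposition 2: eigen-decompose only the small k x k matrix G^T G
lam_G, U_G = np.linalg.eigh(G.T @ G)
U_k = G @ U_G @ np.diag(1.0 / np.sqrt(lam_G))   # approximate top-k eigenvectors of Q
Q_approx = U_k @ np.diag(lam_G) @ U_k.T
```

The only eigen-decompositions here are on k × k matrices, so the overall cost is dominated by the O(Mk^2) products, as claimed.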
Proposition 2. The top k eigenvectors U_k and the corresponding eigenvalues Λ_k of Q ≈ G G^T can be approximated as Λ_k = Λ_G and U_k = G U_G Λ_G^{−1/2}, where U_G and Λ_G are obtained by the eigen-decomposition of the k × k matrix G^T G = U_G Λ_G U_G^T.

Extension to multi-task feature selection
A popular approach to multi-task feature selection is ℓ_{1,2}-norm regularization, where the regularizer over all coefficient vectors β_t across K classification tasks can be expressed as Σ_{j=1}^{M} ( Σ_{t=1}^{K} (β_t^j)^2 )^{1/2}, where β_t^j is the coefficient of the j-th feature in the t-th task. Due to the ℓ_1 norm applied on top of the ℓ_2 norms of the groups of coefficients of each feature across the K tasks, the ℓ_{1,2}-norm regularizer selects the same feature subset across the K tasks. However, the ℓ_{1,2}-norm regularized problem is challenging to solve because of the non-smoothness of the ℓ_{1,2} norm. In this section, we show that our proposed feature selection method can be easily extended to a multi-task version. The resulting optimization problem has the same form as objective (9) and can be solved efficiently as shown in the previous sections.
Let us denote w_t as the binary indicator vector defined in (1) that represents the selected feature subset of the t-th classification task. If we do not consider the relatedness between the K classification tasks, each individual w_t can be obtained by applying Algorithm 1 to the corresponding task. Based on the conclusions of [10,11], it is beneficial to select the same feature subset across the K related classification tasks. In our case, this can be achieved by setting w_t = w for all t. Therefore, the shared feature subset across the K tasks, defined by vector w, can be obtained by solving the following optimization problem:

max_w  Σ_{j=1}^{K} ( c_j^T w − λ/m^2 · w^T Q_j w )
s.t.  w_i ∈ {0, 1}, i = 1, …, M;  Σ_{i=1}^{M} w_i = m,    (15)

where c_j and Q_j are the linear and quadratic terms of the QP corresponding to the j-th task, computed as explained in the previous sections. The technique of relaxing the binary integer constraints and applying a low-rank approximation to Q, introduced in the previous sections, can also be used to solve (15). The extended multi-task feature selection algorithm is again a feature filter and can be used in conjunction with any supervised learning algorithm.
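A sketch of how the per-task terms can be assembled for the multi-task objective (15); the helper name `task_terms` is our own, and the data are randomly generated placeholders:

```python
import numpy as np

def task_terms(X, y):
    """Linear and quadratic terms (c_t, Q_t) for one task, following (9).
    Q_t is the Pearson-correlation similarity between features, which is PSD."""
    A = np.where(y[:, None] != y[None, :], 1.0, -1.0)
    L = np.diag(A.sum(axis=1)) - A          # Laplacian L = D - A
    c = np.diag(X.T @ L @ X)                # Proposition 1
    Q = np.corrcoef(X, rowvar=False)        # features are columns of X
    return c, Q

rng = np.random.default_rng(3)
M, tasks = 10, 3
data = [(rng.normal(size=(12, M)), rng.integers(0, 2, size=12))
        for _ in range(tasks)]

# the multi-task objective (15) sums the per-task terms over a shared w
c_sum = np.zeros(M)
Q_sum = np.zeros((M, M))
for X, y in data:
    c_t, Q_t = task_terms(X, y)
    c_sum += c_t
    Q_sum += Q_t
# c_sum and Q_sum now plug into the same relaxed QP as in the single-task case
```

Since each Q_t is PSD, their sum Q_sum is also PSD, so the multi-task problem stays convex after relaxation.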