Canonical Correlation Analysis (CCA) identifies linear dependencies between two different but related multivariate views of the same underlying semantics. Setting aside its extensions to more than two views, CCA treats the two views as complex labels for each other, guiding the search for maximally correlated projection vectors (covariates). As a consequence, CCA can overfit the training data: different correlated projections may be found when the two-view training set is resampled. Ensemble approaches based on resampling have proved effective at improving the generalization of many machine learning methods, yet no ensemble formulation has previously been proposed for CCA. In this paper, we propose an ensemble method that obtains a final set of covariates by combining multiple sets of covariates extracted from subsamples of the training data. Compared with applying classical CCA to the whole training set, combining the more weakly correlated covariates extracted from a number of subsamples produces stronger correlations that generalize to unseen test examples. Experimental results on emotion recognition, digit recognition, content-based retrieval, and multi-view object recognition show that ensemble CCA generalizes better, both in the test-set correlations of the covariates and in the test-set accuracy of classifiers built on those covariates.