This study measures the image representation capacity of Layer-4, one of the visual processing stages in the cerebral cortex, and compares it with pixel-level representations. Visual information arriving at Layer-4 as input is transformed into a nonlinear representation capable of detecting certain visual patterns, similar to an RBF network, and is then passed to neighboring and higher layers. This internal representation, analogous to the kernel concept in machine learning, specifies which frequent patterns exist in a given image patch and can also readily learn and predict their variations, such as spatial transformations. In this paper, using a Layer-4 simulation model, the sensitivity of the Layer-4 representation to spatial transformations of natural images is analyzed. Using a multi-kernel support vector machine (SVM) approach with a limited number of support vectors, it is shown that the Layer-4 representation is more accurate on the targeted learning tasks.
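As a rough illustration of the comparison the abstract describes, the sketch below contrasts a linear SVM on raw pixels with a multi-kernel SVM whose Gram matrix is an unweighted sum of RBF kernels at several bandwidths. This is not the paper's Layer-4 model: the dataset (scikit-learn's 8x8 digit patches), the gamma values, and the equal-weight kernel combination are all illustrative assumptions standing in for natural image patches and a learned kernel mixture.

```python
# Illustrative sketch (NOT the paper's Layer-4 model): raw-pixel linear SVM
# vs. a simple multi-kernel SVM built from a sum of RBF kernels.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small 8x8 image patches as a stand-in for natural image patches.
X, y = load_digits(return_X_y=True)
X = X / 16.0                                 # scale pixel intensities to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def multi_rbf_kernel(A, B, gammas=(0.01, 0.05, 0.1)):
    """Unweighted sum of RBF kernels over several bandwidths (assumed gammas)."""
    sq = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)                   # squared Euclidean distances
    return sum(np.exp(-g * sq) for g in gammas)

# Pixel-level baseline: linear kernel on raw intensities.
pixel_svm = SVC(kernel="linear").fit(X_tr, y_tr)

# Multi-kernel SVM: pass the combined Gram matrix as a precomputed kernel.
mk_svm = SVC(kernel="precomputed").fit(multi_rbf_kernel(X_tr, X_tr), y_tr)

print("pixel (linear) accuracy:", pixel_svm.score(X_te, y_te))
print("multi-RBF accuracy:", mk_svm.score(multi_rbf_kernel(X_te, X_tr), y_te))
```

With a precomputed kernel, prediction requires the kernel matrix between test and training points, which is why `multi_rbf_kernel(X_te, X_tr)` is recomputed at evaluation time.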