Convolutional Neural Networks (CNNs) have become important tools for performing many machine learning tasks, including image analysis and pattern recognition. Although most existing CNNs are constructed, trained, and deployed on traditional computing platforms (multicore CPUs and GPUs), the precision of the CNN parameters and input data can be lowered without significantly sacrificing the success rate of CNN predictions. As a result, fewer bits can be used to encode the CNN parameters and the input data. Such a reduction in bit precision and data representation can potentially simplify the design of energy-efficient processors suitable for deep learning, such as the TrueNorth neurosynaptic core designed by IBM and Google's tensor processing unit (TPU). We discuss the use of a train-then-constrain approach to modify a trained CNN by quantizing the single-precision floating-point input data, as well as the CNN weights and biases, to a much smaller number of levels that can be represented by a few bits and a codebook (a.k.a. lookup table). We demonstrate the effectiveness of this approach on the classification of cryo-electron microscopy (CryoEM) images that are used to reconstruct 3D density maps of macromolecules and molecular complexes.
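To make the codebook idea concrete, the following is a minimal NumPy sketch of train-then-constrain quantization, not the exact procedure used here: it maps already-trained float32 weights onto 2^bits levels and stores only small integer indices plus the codebook of representative values. The uniform level placement is an illustrative assumption; the levels could equally be chosen by clustering.

```python
import numpy as np

def quantize_to_codebook(weights: np.ndarray, bits: int = 4):
    """Quantize float32 weights to 2**bits levels.

    Returns (indices, codebook): `indices` holds one small integer per
    weight, and `codebook` is the lookup table of representative values.
    """
    levels = 2 ** bits
    # Illustrative choice: uniformly spaced levels spanning the observed
    # weight range (a clustering-based codebook is another option).
    codebook = np.linspace(weights.min(), weights.max(), levels, dtype=np.float32)
    # Assign each weight to its nearest codebook level.
    indices = np.abs(weights.reshape(-1, 1) - codebook).argmin(axis=1)
    return indices.astype(np.uint8).reshape(weights.shape), codebook

def dequantize(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Reconstruct approximate weights by codebook lookup."""
    return codebook[indices]

# Example: constrain a "trained" weight tensor to 4 bits (16 levels).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(64, 3, 3)).astype(np.float32)
idx, book = quantize_to_codebook(w, bits=4)
w_hat = dequantize(idx, book)
print("max quantization error:", np.abs(w - w_hat).max())
```

In this scheme each weight costs only `bits` bits of storage (here 4 instead of 32), and inference recovers approximate weights with a single table lookup, which is the property that makes such representations attractive for low-power hardware.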