Modified MNIST: Exploring State-of-the-art Deep Learning Classification Architectures

"Nothing is better suited to the task of image classification than neural network architectures built around CNNs. The 2D topological nature of pixels and the high dimensionality of images (i.e. height, width and colour channels) make CNNs the most suitable and popular choice for building such architectures. In this case, rather than working with the classical MNIST dataset, we worked with the Modified MNIST dataset. Each image contained three randomly placed handwritten digits, and our goal was to identify the largest digit. Building on the idea of transfer learning, we used state-of-the-art models such as VGG, InceptionV3, Xception, ResnetV2, InceptionResnetV2, Densenet and NASNetLarge, on the freely available Tesla K80 GPU in Google Colab."
- Parra, Luo and Raltson. Modified MNIST: Exploring State-of-the-art Deep Learning Classification Architectures

Our paper: https://drive.google.com/file/d/1U83h-mp98gqcPfC3Ixjb2D_T7xa-Og0r/view

Our repo: https://github.com/JairParra/Modified_MNIST_state-of-the-art_exploration
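The transfer-learning approach described in the abstract can be sketched roughly as follows: take a pretrained convolutional base (InceptionV3 here, one of the models listed), freeze its weights, and train a small classification head on top for the 10 digit classes. This is a minimal illustration, not the paper's exact setup; the input shape, head size, and optimizer are assumptions (pass `weights="imagenet"` instead of `None` to actually load pretrained weights).

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_transfer_model(input_shape=(128, 128, 3), num_classes=10, weights=None):
    """Frozen InceptionV3 base + small trainable classification head.

    input_shape and the 256-unit head are illustrative assumptions;
    weights=None avoids downloading ImageNet weights for this sketch.
    """
    base = InceptionV3(include_top=False, weights=weights, input_shape=input_shape)
    base.trainable = False  # freeze the convolutional base (transfer learning)

    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs=base.input, outputs=outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_transfer_model()
print(model.output_shape)
```

Only the head's weights are updated during training; a common follow-up step is to unfreeze the top few base layers and fine-tune them with a lower learning rate.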



