Current Chinese Science

ISSN (Print): 2210-2981
ISSN (Online): 2210-2914

Research Article Section: Automation and Control Systems

Geometry and Topology of Conceptual Representations of Simple Visual Data

Author(s): Serge Dolgikh*

Volume 3, Issue 2, 2023

Published on: 26 December, 2022

Pages: 84-95 (12 pages)

DOI: 10.2174/2210298103666221130101950


Abstract

Introduction: Representations play an essential role in artificial and biological learning systems by producing informative structures associated with characteristic patterns in the sensory environment. In this work, we examined the unsupervised latent representations of images of basic geometric shapes produced by neural network models through unsupervised generative self-learning.
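The kind of unsupervised generative self-learning described above can be sketched as follows. This is a hypothetical toy example, not the authors' implementation: a minimal tied-weight linear autoencoder is trained by gradient descent to minimize reconstruction (generative) error on tiny synthetic images of two shape categories, producing 2-D latent codes.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shape(kind, size=8):
    """Render a tiny binary image of a square or a disk (toy stand-in
    for the geometric-shape images studied here)."""
    img = np.zeros((size, size))
    c, r = size // 2, size // 3
    yy, xx = np.mgrid[:size, :size]
    if kind == "square":
        img[c - r:c + r, c - r:c + r] = 1.0
    else:
        img[(yy - c) ** 2 + (xx - c) ** 2 <= r * r] = 1.0
    return img.ravel()

# Toy dataset: noisy squares and disks, flattened to 64-pixel vectors.
X = np.stack([make_shape(k) + 0.05 * rng.standard_normal(64)
              for k in ["square", "disk"] * 50])

# Minimal linear autoencoder with tied weights: encoder z = x W,
# decoder x_hat = z W^T, trained by gradient descent on the
# reconstruction error ||X - X W W^T||^2.
W = 0.01 * rng.standard_normal((64, 2))
err_init = np.mean((X - (X @ W) @ W.T) ** 2)
for _ in range(1000):
    E = (X @ W) @ W.T - X                      # reconstruction residual
    grad = (X.T @ E @ W + E.T @ X @ W) / len(X)
    W -= 0.01 * grad

Z = X @ W                                      # learned 2-D latent codes
err_final = np.mean((Z @ W.T - X) ** 2)
```

With only the reconstruction objective, the two shape categories typically occupy distinct regions of the latent plane; that emergent structure is what the analysis in this paper targets (nonlinear models replace the linear map, but the principle is the same).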

Background: Unsupervised concept learning with generative neural network models.

Objective: Investigation of the structure, geometry, and topology of the latent representations of generative models that emerge as a result of unsupervised self-learning by minimization of generative error; examination of the capacity of generative models to abstract and generalize essential data characteristics, including shape type, size, contrast, position, and orientation.

Methods: Generative neural network models, direct visualization, density clustering, and probing and scanning of latent positions and regions.
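The density-clustering step named above can be illustrated with a toy mean-shift procedure, a standard density-mode finder. The bandwidth and the synthetic 2-D "latent codes" below are illustrative assumptions, not the paper's data or parameters:

```python
import numpy as np

def mean_shift(points, bandwidth, iters=50):
    """Move every point to the Gaussian-weighted mean of its
    neighbourhood; points converge onto the density modes (clusters)."""
    shifted = points.copy()
    for _ in range(iters):
        d2 = ((shifted[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))
        shifted = (w @ points) / w.sum(axis=1, keepdims=True)
    return shifted

rng = np.random.default_rng(1)
# Hypothetical 2-D latent codes: two dense regions, standing in for two
# shape categories that formed separate clusters in the latent space.
latents = np.vstack([rng.normal([1.0, 1.0], 0.1, (40, 2)),
                     rng.normal([4.0, 4.0], 0.1, (40, 2))])

modes = mean_shift(latents, bandwidth=0.5)
# Points that converged to (numerically) the same mode share a cluster.
n_clusters = np.unique(modes.round(1), axis=0).shape[0]
```

Probing and scanning would then decode selected latent positions (for example, the recovered modes) back through the generative model to see which image patterns each dense region encodes.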

Results: Latent representations were found to be structurally consistent; their geometrical and topological characteristics were examined and analysed with unsupervised methods. Methods for the unsupervised analysis of latent representations were developed and verified.

Conclusion: Generative models can be instrumental in producing informative, compact representations of complex sensory data that correlate with its characteristic patterns.

Keywords: Unsupervised learning, representation learning, concept learning, artificial intelligence, clustering, sensory data.


© 2024 Bentham Science Publishers