Abstract
Active inference provides a general framework for behavior and learning in autonomous agents. It states that an agent will attempt to minimize its variational free energy, defined in terms of beliefs over observations, internal states and policies. Traditionally, every aspect of a discrete active inference model must be specified by hand, i.e., the hidden state space structure as well as the required distributions, such as the likelihood and transition probabilities, must be defined manually. Recently, efforts have been made to learn state space representations automatically from observations using deep neural networks. In this paper, we present a novel approach to learning state spaces using quantum-physics-inspired tensor networks. The ability of tensor networks to represent the probabilistic nature of quantum states and to reduce large state spaces makes them a natural candidate for active inference. We show how tensor networks can be used as a generative model for sequential data. Furthermore, we show how one can obtain beliefs from such a generative model and how an active inference agent can use these to compute the expected free energy. Finally, we demonstrate our method on the classic T-maze environment.
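For readers unfamiliar with the quantities named in the abstract, the variational free energy and expected free energy are commonly written as follows in the discrete active inference literature. This is the generic textbook formulation, not notation taken from this paper.

```latex
% Standard discrete active inference quantities (generic formulation,
% not necessarily the notation used in this paper).

% Variational free energy over hidden states s given observations o:
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s) \,\|\, p(s)\right]
    - \mathbb{E}_{q(s)}\!\left[\ln p(o \mid s)\right]

% One common form of the expected free energy of a policy \pi,
% summed over future time steps \tau:
G(\pi) = \sum_{\tau} \mathbb{E}_{q(o_\tau, s_\tau \mid \pi)}
         \!\left[\ln q(s_\tau \mid \pi) - \ln p(o_\tau, s_\tau \mid \pi)\right]
```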
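As an illustration of how a tensor network can act as a generative model for sequential data, the sketch below implements a small matrix product state (MPS) "Born machine" over discrete observation sequences. All names, shapes and sizes are illustrative assumptions for a toy example, not the implementation used in the paper.

```python
import numpy as np
from itertools import product

# Minimal sketch of a matrix product state (MPS) Born machine as a
# generative model over sequences of discrete observations.
rng = np.random.default_rng(0)

T = 4       # sequence length (assumed toy value)
n_obs = 3   # number of discrete observation values per time step
bond = 5    # bond dimension (controls how much "state" the MPS carries)

# One core tensor per time step: shape (left_bond, n_obs, right_bond),
# with trivial (size-1) bonds at the sequence boundaries.
cores = [rng.normal(size=(1 if t == 0 else bond,
                          n_obs,
                          1 if t == T - 1 else bond))
         for t in range(T)]

def amplitude(seq):
    """Contract the MPS along a fixed observation sequence."""
    m = cores[0][:, seq[0], :]
    for t in range(1, T):
        m = m @ cores[t][:, seq[t], :]
    return m[0, 0]

def unnormalised_prob(seq):
    """Born rule: p(o_1..o_T) is proportional to the squared amplitude."""
    return amplitude(seq) ** 2

# Normalising constant by brute-force enumeration (only viable for toy sizes).
Z = sum(unnormalised_prob(s) for s in product(range(n_obs), repeat=T))

seq = (0, 2, 1, 0)
print("p(o_1..o_T) =", unnormalised_prob(seq) / Z)
```

In practice the normalisation and marginals would be obtained by tensor contractions rather than enumeration; the brute-force sum here only keeps the toy example self-contained.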
Original language | English |
---|---|
Number of pages | 14 |
Publication status | Published - 24 Oct 2022 |
Event | 3rd International Workshop on Active Inference (IWAI 2022), Grenoble, France. Duration: 19 Sept 2022 → 19 Sept 2022. https://iwaiworkshop.github.io/ |
Seminar/Workshop | 3rd International Workshop on Active Inference (IWAI 2022) |
---|---|
Country/Territory | France |
City | Grenoble |
Period | 19/09/22 → 19/09/22 |
Internet address | https://iwaiworkshop.github.io/ |
Austrian Fields of Science 2012
- 102019 Machine learning
Keywords
- tensor networks
- machine learning