Fleet  0.0.9
Fast inference in the Language of Thought (LOT)

Introduction

Fleet is a C++ library for programming language-of-thought models. In these models, you typically specify a grammar of primitive operations that can be composed to form complex hypotheses. These hypotheses are best thought of as programs in a mental programming language, and the job of learners is to observe data (typically inputs and outputs of programs) and infer the most likely program to have generated the outputs from the inputs. Fleet accomplishes this in a fully Bayesian setup, with a prior over programs typically defined through a Probabilistic Context-Free Grammar (PCFG) and a likelihood model that typically says the output of a program is observed with some noise.

Fleet is most similar to LOTlib (https://github.com/piantado/LOTlib3) but is considerably faster. LOTlib converts grammar productions into Python expressions which are then eval'd in Python; this process is flexible and powerful, but very slow. Fleet avoids it by implementing a lightweight stack-based virtual machine in which programs can be evaluated directly. This is especially advantageous when evaluating stochastic hypotheses (e.g. those using flip() or sample()), in which multiple execution paths must be evaluated. Fleet stores the multiple execution traces of a single program in a priority queue (sorted by probability), allowing you to rapidly explore the space of execution traces.

Fleet is structured to automatically create this virtual machine and a grammar for programs from just the type specification on primitives. The bulk of a Fleet program is therefore in specifying the primitives that are used and a likelihood model that scores any potential program against the data.

To accomplish this, Fleet makes heavy use of C++ template metaprogramming. It requires strongly-typed functions, and requires you to specify the macro FLEET_GRAMMAR_TYPES in order to tell its virtual machine what kinds of variables must be stored. In addition, Fleet uses a std::tuple named PRIMITIVES to help define the grammar. This tuple consists of a collection of Primitive objects (essentially just lambda functions with weights). The input/output types of these primitives are automatically deduced from the lambdas (using templates) and the corresponding functions are added to the grammar. Note that the details of this mechanism may change in future versions in order to make it easier to add grammar types in other ways. Fleet also has a number of built-in operations which do special things to the virtual machine, including Builtin::Flip, which stores multiple execution traces; Builtin::If, which uses short-circuit evaluation; Builtin::Recurse, which handles recursive hypotheses; and Builtin::X, which provides the argument to the expression. These are not currently well documented but should be soon.

Installation

Fleet is header-only and requires no additional dependencies. Command-line arguments are processed with CLI11 (CLI11.hpp is included in src/dependencies/).

The easiest way to begin using Fleet is to modify one of the examples. For simple rational-rules style inference, try Models/RationalRules; for an example using stochastic operations, try Models/FormalLanguageTheory-Simple.

Fleet is developed using GCC 9 (version >8 required).

Because Fleet is header-only, there is no separate build step for the library itself; to build one of the example models, run make inside that model's directory.

Inference

Fleet provides a number of simple inference routines. These are all demonstrated in Models/FormalLanguageTheory-Simple.

Markov-Chain Monte-Carlo

Search (Monte-Carlo Tree Search)

Enumeration

etc...
