Book title: Foundations of Probabilistic Logic Programming
Languages, Semantics, Inference and Learning
Author: Fabrizio Riguzzi, University of Ferrara, Italy
With a Foreword by Agostino Dovier, University of Udine, Italy
Publisher: River Publishers
Series: River Publishers Series in Software Engineering
ISBN: 9788770220187
e-ISBN: 9788770220170
Get it from:

  1. the publisher
  2. Amazon
  3. electronic version from Google Play
  4. other sellers on Google Books

Sample content: Table of contents, Foreword, Preface, Chapter 2

BibTeX entry

LaTeX sources of slides for some of the chapters are available on GitHub

[Animation: Kalman filter in 2D]

Abstract

Probabilistic Logic Programming extends Logic Programming by enabling the representation of uncertain information. Probabilistic Logic Programming is at the intersection of two wider research fields: the integration of logic and probability and Probabilistic Programming.

Logic enables the representation of complex relations among entities, while probability theory is useful for modeling uncertainty over attributes and relations. Combining the two is a very active field of study. Probabilistic Programming extends programming languages with probabilistic primitives that can be used to write complex probabilistic models. Algorithms for the inference and learning tasks are then provided automatically by the system.

Probabilistic Logic Programming is at the same time a logic language, with its knowledge representation capabilities, and a Turing-complete language, with its computation capabilities, thus providing the best of both worlds.

Since its birth, the field of Probabilistic Logic Programming has seen a steady increase in activity, with many proposals for languages and algorithms for inference and learning. Foundations of Probabilistic Logic Programming aims to provide an overview of the field, with a special emphasis on languages under the Distribution Semantics, one of the most influential approaches. The book presents the main ideas for semantics, inference, and learning and highlights connections between the methods.

Many of the book's examples include a link to a page of the web application http://cplint.eu, where the code can be run online.
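To give a flavor of the Distribution Semantics, here is a minimal sketch in Python that enumerates the possible worlds of a tiny ProbLog-style program. The program (a hypothetical textbook-style example, not taken from the book) has two probabilistic facts, 0.1::burglary and 0.2::earthquake, and the rules alarm :- burglary and alarm :- earthquake; the probability of a query is the sum of the probabilities of the worlds whose model entails it.

```python
from itertools import product

# Probabilities of the program's probabilistic facts.
facts = {"burglary": 0.1, "earthquake": 0.2}

def prob_alarm():
    """P(alarm) under the Distribution Semantics, by enumerating worlds."""
    names = list(facts)
    total = 0.0
    for choice in product([True, False], repeat=len(names)):
        world = dict(zip(names, choice))
        # Probability of this world: product over independent fact choices.
        w = 1.0
        for n in names:
            w *= facts[n] if world[n] else 1.0 - facts[n]
        # alarm holds in the world's model iff at least one cause is true.
        if world["burglary"] or world["earthquake"]:
            total += w
    return total

print(round(prob_alarm(), 4))  # → 0.28
```

The result agrees with the closed form 1 - (1 - 0.1)(1 - 0.2) = 0.28; real systems such as ProbLog and cplint avoid this exponential enumeration by compiling explanations into compact representations, as the inference chapters of the book describe.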

Keywords: Probabilistic logic programming, statistical relational learning, statistical relational artificial intelligence, distribution semantics, graphical models, artificial intelligence, machine learning

Table of Contents

  1. 1 Preliminaries
    1. 1.1 Orders, Lattices, Ordinals
    2. 1.2 Mappings and Fixpoints
    3. 1.3 Logic Programming
    4. 1.4 Semantics for Normal Logic Programs
      1. 1.4.1 Program Completion
      2. 1.4.2 Well-Founded Semantics
      3. 1.4.3 Stable Model Semantics
    5. 1.5 Probability Theory
    6. 1.6 Probabilistic Graphical Models
  2. 2 Probabilistic Logic Programming Languages
    1. 2.1 Languages with the Distribution Semantics
      1. 2.1.1 Logic Programs with Annotated Disjunctions
      2. 2.1.2 ProbLog
      3. 2.1.3 Probabilistic Horn Abduction
      4. 2.1.4 PRISM
    2. 2.2 The Distribution Semantics for Programs Without Function Symbols
    3. 2.3 Examples of Programs
    4. 2.4 Equivalence of Expressive Power
    5. 2.5 Translation to Bayesian Networks
    6. 2.6 Generality of the Distribution Semantics
    7. 2.7 Extensions of the Distribution Semantics
    8. 2.8 CP-Logic
    9. 2.9 Semantics for Non-Sound Programs
    10. 2.10 KBMC Probabilistic Logic Programming Languages
      1. 2.10.1 Bayesian Logic Programs
      2. 2.10.2 CLP(BN)
      3. 2.10.3 The Prolog Factor Language
    11. 2.11 Other Semantics for Probabilistic Logic Programming
      1. 2.11.1 Stochastic Logic Programs
      2. 2.11.2 ProPPR
    12. 2.12 Other Semantics for Probabilistic Logics
      1. 2.12.1 Nilsson’s Probabilistic Logic
      2. 2.12.2 Markov Logic Networks
        1. 2.12.2.1 Encoding Markov Logic Networks with Probabilistic Logic Programming
      3. 2.12.3 Annotated Probabilistic Logic Programs
  3. 3 Semantics with Function Symbols
    1. 3.1 The Distribution Semantics for Programs with Function Symbols
    2. 3.2 Infinite Covering Set of Explanations
    3. 3.3 Comparison with Sato and Kameya's Definition
  4. 4 Semantics for Hybrid Programs
    1. 4.1 Hybrid ProbLog
    2. 4.2 Distributional Clauses
    3. 4.3 Extended PRISM
    4. 4.4 cplint Hybrid Programs
    5. 4.5 Probabilistic Constraint Logic Programming
      1. 4.5.1 Dealing with Imprecise Probability Distributions
  5. 5 Exact Inference
    1. 5.1 PRISM
    2. 5.2 Knowledge Compilation
    3. 5.3 ProbLog1
    4. 5.4 cplint
    5. 5.5 SLGAD
    6. 5.6 PITA
    7. 5.7 ProbLog2
    8. 5.8 TP Compilation
    9. 5.9 Modeling Assumptions in PITA
      1. 5.9.1 PITA(OPT)
      2. 5.9.2 MPE with PITA
    10. 5.10 Inference for Queries with an Infinite Number of Explanations
    11. 5.11 Inference for Hybrid Programs
  6. 6 Lifted Inference
    1. 6.1 Preliminaries on Lifted Inference
      1. 6.1.1 Variable Elimination
      2. 6.1.2 GC-FOVE
    2. 6.2 LP2
      1. 6.2.1 Translating ProbLog into PFL
    3. 6.3 Lifted Inference with Aggregation Parfactors
    4. 6.4 Weighted First-Order Model Counting
    5. 6.5 Cyclic Logic Programs
    6. 6.6 Comparison of the Approaches
  7. 7 Approximate Inference
    1. 7.1 ProbLog1
      1. 7.1.1 Iterative Deepening
      2. 7.1.2 k-best
      3. 7.1.3 Monte Carlo
    2. 7.2 MCINTYRE
    3. 7.3 Approximate Inference for Queries with an Infinite Number of Explanations
    4. 7.4 Conditional Approximate Inference
    5. 7.5 Approximate Inference by Sampling for Hybrid Programs
    6. 7.6 Approximate Inference with Bounded Error for Hybrid Programs
    7. 7.7 k-Optimal
    8. 7.8 Explanation-Based Approximate Weighted Model Counting
    9. 7.9 Approximate Inference with TP-compilation
    10. 7.10 DISTR and EXP Tasks
  8. 8 Non-Standard Inference
    1. 8.1 Possibilistic Logic Programming
    2. 8.2 Decision-Theoretic ProbLog
    3. 8.3 Algebraic ProbLog
  9. 9 Parameter Learning
    1. 9.1 PRISM Parameter Learning
    2. 9.2 LLPAD and ALLPAD Parameter Learning
    3. 9.3 LeProbLog
    4. 9.4 EMBLEM
    5. 9.5 ProbLog2 Parameter Learning
    6. 9.6 Parameter Learning for Hybrid Programs
  10. 10 Structure Learning
    1. 10.1 Inductive Logic Programming
    2. 10.2 LLPAD and ALLPAD Structure Learning
    3. 10.3 ProbLog Theory Compression
    4. 10.4 ProbFOIL and ProbFOIL+
    5. 10.5 SLIPCOVER
      1. 10.5.1 The Language Bias
      2. 10.5.2 Description of the Algorithm
        1. 10.5.2.1 Function INITIALBEAMS
        2. 10.5.2.2 Beam Search with Clause Refinements
      3. 10.5.3 Execution Example
    6. 10.6 Examples of Datasets
  11. 11 cplint Examples
    1. 11.1 cplint Commands
    2. 11.2 Natural Language Processing
      1. 11.2.1 Probabilistic Context-Free Grammars
      2. 11.2.2 Probabilistic Left Corner Grammars
      3. 11.2.3 Hidden Markov Models
    3. 11.3 Drawing Binary Decision Diagrams
    4. 11.4 Gaussian Processes
    5. 11.5 Dirichlet Processes
      1. 11.5.1 The Stick-Breaking Process
      2. 11.5.2 The Chinese Restaurant Process
      3. 11.5.3 Mixture Model
    6. 11.6 Bayesian Estimation
    7. 11.7 Kalman Filter
    8. 11.8 Stochastic Logic Programs
    9. 11.9 Tile Map Generation
    10. 11.10 Markov Logic Networks
    11. 11.11 Truel
    12. 11.12 Coupon Collector Problem
    13. 11.13 One-Dimensional Random Walk
    14. 11.14 Latent Dirichlet Allocation
    15. 11.15 The Indian GPA Problem
    16. 11.16 Bongard Problems
  12. 12 Conclusions