Foundations of Probabilistic Logic Programming: Languages, Semantics, Inference and Learning
by Fabrizio Riguzzi
- Publisher: River Publishers, Routledge
- Year: 2023
- Language: English
- Pages: 548
- Edition: 2
- Category: Library
Table of Contents
Cover
Half Title
Series Page
Title Page
Copyright Page
Table of Contents
Foreword
Preface to the 2nd Edition
Preface
Acknowledgments
List of Figures
List of Tables
List of Examples
List of Definitions
List of Theorems
List of Acronyms
Chapter 1: Preliminaries
1.1: Orders, Lattices, Ordinals
1.2: Mappings and Fixpoints
1.3: Logic Programming
1.4: Semantics for Normal Logic Programs
1.4.1: Program completion
1.4.2: Well-founded semantics
1.4.3: Stable model semantics
1.5: Probability Theory
1.6: Probabilistic Graphical Models
Chapter 2: Probabilistic Logic Programming Languages
2.1: Languages with the Distribution Semantics
2.1.1: Logic programs with annotated disjunctions
2.1.2: ProbLog
2.1.3: Probabilistic Horn abduction
2.1.4: PRISM
2.2: The Distribution Semantics for Programs Without Function Symbols
2.3: Examples of Programs
2.4: Equivalence of Expressive Power
2.5: Translation into Bayesian Networks
2.6: Generality of the Distribution Semantics
2.7: Extensions of the Distribution Semantics
2.8: CP-logic
2.9: KBMC Probabilistic Logic Programming Languages
2.9.1: Bayesian logic programs
2.9.2: CLP(BN)
2.9.3: The Prolog factor language
2.10: Other Semantics for Probabilistic Logic Programming
2.10.1: Stochastic logic programs
2.10.2: ProPPR
2.11: Other Semantics for Probabilistic Logics
2.11.1: Nilsson's probabilistic logic
2.11.2: Markov logic networks
2.11.2.1: Encoding Markov logic networks with probabilistic logic programming
2.11.3: Annotated probabilistic logic programs
Chapter 3: Semantics with Function Symbols
3.1: The Distribution Semantics for Programs with Function Symbols
3.2: Infinite Covering Set of Explanations
3.3: Comparison with Sato and Kameya's Definition
Chapter 4: Hybrid Programs
4.1: Hybrid ProbLog
4.2: Distributional Clauses
4.3: Extended PRISM
4.4: cplint Hybrid Programs
4.5: Probabilistic Constraint Logic Programming
4.5.1: Dealing with imprecise probability distributions
Chapter 5: Semantics for Hybrid Programs with Function Symbols
5.1: Examples of PCLP with Function Symbols
5.2: Preliminaries
5.3: The Semantics of PCLP is Well-defined
Chapter 6: Probabilistic Answer Set Programming
6.1: A Semantics for Unsound Programs
6.2: Features of Answer Set Programming
6.3: Probabilistic Answer Set Programming
Chapter 7: Complexity of Inference
7.1: Inference Tasks
7.2: Background on Complexity Theory
7.3: Complexity for Nonprobabilistic Inference
7.4: Complexity for Probabilistic Programs
7.4.1: Complexity for acyclic and locally stratified programs
7.4.2: Complexity results from [Mauá and Cozman, 2020]
Chapter 8: Exact Inference
8.1: PRISM
8.2: Knowledge Compilation
8.3: ProbLog1
8.4: cplint
8.5: SLGAD
8.6: PITA
8.7: ProbLog2
8.8: TP Compilation
8.9: MPE and MAP
8.9.1: MAP and MPE in ProbLog
8.9.2: MAP and MPE in PITA
8.10: Modeling Assumptions in PITA
8.10.1: PITA(OPT)
8.10.2: VIT with PITA
8.11: Inference for Queries with an Infinite Number of Explanations
Chapter 9: Lifted Inference
9.1: Preliminaries on Lifted Inference
9.1.1: Variable elimination
9.1.2: GC-FOVE
9.2: LP2
9.2.1: Translating ProbLog into PFL
9.3: Lifted Inference with Aggregation Parfactors
9.4: Weighted First-order Model Counting
9.5: Cyclic Logic Programs
9.6: Comparison of the Approaches
Chapter 10: Approximate Inference
10.1: ProbLog1
10.1.1: Iterative deepening
10.1.2: k-best
10.1.3: Monte Carlo
10.2: MCINTYRE
10.3: Approximate Inference for Queries with an Infinite Number of Explanations
10.4: Conditional Approximate Inference
10.5: k-optimal
10.6: Explanation-based Approximate Weighted Model Counting
10.7: Approximate Inference with TP-compilation
Chapter 11: Non-standard Inference
11.1: Possibilistic Logic Programming
11.2: Decision-theoretic ProbLog
11.3: Algebraic ProbLog
Chapter 12: Inference for Hybrid Programs
12.1: Inference for Extended PRISM
12.2: Inference with Weighted Model Integration
12.2.1: Weighted Model Integration
12.2.2: Algebraic Model Counting
12.2.2.1: The probability density semiring and WMI
12.2.2.2: Symbo
12.2.2.3: Sampo
12.3: Approximate Inference by Sampling for Hybrid Programs
12.4: Approximate Inference with Bounded Error for Hybrid Programs
12.5: Approximate Inference for the DISTR and EXP Tasks
Chapter 13: Parameter Learning
13.1: PRISM Parameter Learning
13.2: LLPAD and ALLPAD Parameter Learning
13.3: LeProbLog
13.4: EMBLEM
13.5: ProbLog2 Parameter Learning
13.6: Parameter Learning for Hybrid Programs
13.7: DeepProbLog
13.7.1: DeepProbLog inference
13.7.2: Learning in DeepProbLog
Chapter 14: Structure Learning
14.1: Inductive Logic Programming
14.2: LLPAD and ALLPAD Structure Learning
14.3: ProbLog Theory Compression
14.4: ProbFOIL and ProbFOIL+
14.5: SLIPCOVER
14.5.1: The language bias
14.5.2: Description of the algorithm
14.5.2.1: Function INITIALBEAMS
14.5.2.2: Beam search with clause refinements
14.5.3: Execution Example
14.6: Learning the Structure of Hybrid Programs
14.7: Scaling PILP
14.7.1: LIFTCOVER
14.7.1.1: Liftable PLP
14.7.1.2: Parameter learning
14.7.1.3: Structure learning
14.7.2: SLEAHP
14.7.2.1: Hierarchical probabilistic logic programs
14.7.2.2: Parameter learning
14.7.2.3: Structure learning
14.8: Examples of Datasets
Chapter 15: cplint Examples
15.1: cplint Commands
15.2: Natural Language Processing
15.2.1: Probabilistic context-free grammars
15.2.2: Probabilistic left corner grammars
15.2.3: Hidden Markov models
15.3: Drawing Binary Decision Diagrams
15.4: Gaussian Processes
15.5: Dirichlet Processes
15.5.1: The stick-breaking process
15.5.2: The Chinese restaurant process
15.5.3: Mixture model
15.6: Bayesian Estimation
15.7: Kalman Filter
15.8: Stochastic Logic Programs
15.9: Tile Map Generation
15.10: Markov Logic Networks
15.11: Truel
15.12: Coupon Collector Problem
15.13: One-dimensional Random Walk
15.14: Latent Dirichlet Allocation
15.15: The Indian GPA Problem
15.16: Bongard Problems
Chapter 16: Conclusions
Bibliography
Index
About the Author
SIMILAR VOLUMES
The computational foundations of Artificial Intelligence (AI) are supported by two cornerstones: logics and Machine Learning. Computational logic has found its realization in a number of frameworks for logic-based approaches to knowledge representation and automated reasoning, such as Logic Programming…
Managing vagueness/fuzziness is starting to play an important role in Semantic Web research, with a large number of research efforts underway. Foundations of Fuzzy Logic and Semantic Web Languages provides a rigorous and succinct account of the mathematical methods and tools used…
Semantics of Programming Languages exposes the basic motivations and philosophy underlying the applications of semantic techniques in computer science. It introduces the mathematical theory of programming languages with an emphasis on higher-order functions and type systems…
What does a probabilistic program actually compute? How can one formally reason about such probabilistic programs? This valuable guide covers such elementary questions and more. It provides a state-of-the-art overview of the theoretical underpinnings of modern probabilistic programming and their app…