Mickaël Binois, Nicholson Collier, Jonathan Ozik
One way to reduce the time required for optimization studies is to evaluate designs in parallel rather than one at a time. For expensive-to-evaluate black boxes, batch versions of Bayesian optimization have been proposed. They work by...
Christopher Amato, George Konidaris, Leslie P. Kaelbling et al.
Decentralized partially observable Markov decision processes (Dec-POMDPs) are general models for decentralized multi-agent decision making under uncertainty. However, they typically model a problem at a low level of granularity, where each ...
Facundo Bromberg, Dimitris Margaritis, Vasant Honavar
We present two algorithms for learning the structure of a Markov network from data: GSMN* and GSIMN. Both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent wit...
Robert Mateescu, Kalev Kask, Vibhav Gogate et al.
This paper investigates parameterized approximate message-passing schemes that are based on bounded inference and inspired by Pearl's belief propagation algorithm (BP). We start with the bounded inference mini-clustering algorithm and th...
Stéphane Ross, Joelle Pineau, Sébastien Paquet et al.
Partially Observable Markov Decision Processes (POMDPs) provide a rich framework for sequential decision-making under uncertainty in stochastic domains. However, solving a POMDP is often intractable except for small problems, due to their co...