Evolutionary Sparsity Regularisation-based Feature Selection for Binary Classification
Bach Hoai Nguyen, Bing Xue, Mengjie Zhang
In classification, feature selection is an essential pre-processing step that selects a small subset of features to improve classification performance. Existing feature selection approaches can be divided into three main categories: wrapper...
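The abstract is cut off here, but the core idea named in the title, sparsity regularisation driving feature selection, can be illustrated with a generic embedded method. The sketch below fits an L1-penalised logistic regression on synthetic data; the dataset, the penalty strength C, and the library choice are illustrative assumptions, not the authors' evolutionary approach.

```python
# Minimal sketch: sparsity-regularised feature selection for binary
# classification via an L1-penalised logistic regression (a generic
# embedded approach, not the paper's evolutionary method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic binary-classification data (illustrative assumption).
X, y = make_classification(n_samples=300, n_features=50,
                           n_informative=5, n_redundant=5, random_state=0)

# The L1 penalty drives most coefficients to exactly zero; the strength of
# the sparsity pressure is controlled by C (smaller C gives a sparser model).
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

selected = np.flatnonzero(model.coef_[0])   # indices of non-zero weights
print(f"selected {selected.size} of {X.shape[1]} features:", selected)
```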
Landscape Analysis for Surrogate Models in the Evolutionary Black-Box Context
Zbyněk Pitra, Jan Koza, Jiří Tumpach et al.
Surrogate modeling has become a valuable technique for black-box optimization tasks with expensive evaluation of the objective function. In this paper, we investigate the relationships between the predictive accuracy of surrogate models, th...
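As a rough illustration of what the predictive accuracy of a surrogate model means in this setting, the sketch below fits a Gaussian process to a handful of evaluated points of a stand-in black-box function and reports the rank correlation between surrogate predictions and true fitness values. The objective, sample sizes, and kernel are assumptions; the paper's landscape features are not reproduced.

```python
# Minimal sketch: estimating the predictive accuracy of a surrogate model
# (here a Gaussian process) for an expensive black-box objective.
# Generic illustration only; the objective and sample sizes are assumptions.
import numpy as np
from scipy.stats import kendalltau
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expensive_objective(x):
    """Stand-in for an expensive black-box function (Rastrigin-like)."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x), axis=-1)

# Archive of already-evaluated points, as a surrogate-assisted optimizer holds.
X_train = rng.uniform(-5, 5, size=(40, 2))
y_train = expensive_objective(X_train)

surrogate = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
surrogate.fit(X_train, y_train)

# Predictive accuracy on fresh points, measured here as rank correlation,
# which is what matters when the surrogate only has to rank candidates.
X_test = rng.uniform(-5, 5, size=(200, 2))
y_true = expensive_objective(X_test)
y_pred = surrogate.predict(X_test)
print("Kendall tau between surrogate and true fitness:",
      round(kendalltau(y_true, y_pred).correlation, 3))
```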
Using Machine Learning Methods to Assess Module Performance Contribution in Modular Optimization Frameworks
Ana Kostovska, Diederick Vermetten, Peter Korošec et al.
Modular algorithm frameworks not only allow for combinations never tested in manually selected algorithm portfolios, but they also provide a structured approach to assess which algorithmic ideas are crucial for the observed performance of a...
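A hedged sketch of the general idea of attributing performance to modules: encode the module switches of each configuration as features, fit a regression model on (configuration, performance) pairs, and inspect feature importances. The module names, data, and model below are made up for illustration and are not the paper's pipeline.

```python
# Minimal sketch: estimating how much each module of a modular optimizer
# contributes to performance by fitting a model on configuration ->
# performance data and reading off feature importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500

# Hypothetical module switches of a modular framework (assumption).
configs = pd.DataFrame({
    "elitism":        rng.integers(0, 2, n),
    "mirrored":       rng.integers(0, 2, n),
    "step_size_rule": rng.integers(0, 3, n),
})
# Synthetic performance measure influenced mostly by two of the modules.
performance = (0.8 * configs["elitism"] + 0.3 * configs["step_size_rule"]
               + rng.normal(0, 0.1, n))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(configs, performance)

for name, score in zip(configs.columns, model.feature_importances_):
    print(f"{name:15s} importance = {score:.2f}")
```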
Runtime Analysis of Single- and Multi-Objective Evolutionary Algorithms for Chance Constrained Optimization Problems with Normally Distributed Random Variables
Frank Neumann, Carsten Witt
Chance constrained optimization problems allow one to model problems where constraints involving stochastic components should only be violated with a small probability. Evolutionary algorithms have been applied to this scenario and shown to ach...
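For context, with independent normally distributed weights a chance constraint has a well-known deterministic equivalent, which is what makes such problems tractable for evolutionary algorithms. The sketch below implements that textbook reformulation as a feasibility check; the concrete numbers, the bound B, and the confidence level alpha are illustrative assumptions.

```python
# Minimal sketch: checking a chance constraint with independent normally
# distributed weights.  For w_i ~ N(mu_i, sigma_i^2), the constraint
#   Pr( sum_i w_i * x_i <= B ) >= alpha
# is equivalent to the deterministic condition
#   mu^T x + Phi^{-1}(alpha) * sqrt( sum_i sigma_i^2 * x_i^2 ) <= B.
# Standard textbook reformulation, shown as a feasibility check an
# evolutionary algorithm could use; the numbers are illustrative.
import numpy as np
from scipy.stats import norm

def chance_feasible(x, mu, sigma, B, alpha=0.95):
    x = np.asarray(x, dtype=float)
    mean = mu @ x
    std = np.sqrt(np.sum((sigma * x) ** 2))
    return mean + norm.ppf(alpha) * std <= B

mu = np.array([3.0, 2.0, 4.0])
sigma = np.array([0.5, 1.0, 0.8])
print(chance_feasible([1, 1, 0], mu, sigma, B=7.0))   # feasible
print(chance_feasible([1, 1, 1], mu, sigma, B=7.0))   # infeasible
```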
Large-Scale Multiobjective Evolutionary Algorithm Guided by Low-Dimensional Surrogates of Scalarization Functions
Haoran Gu, Handing Wang, Cheng He et al.
Recently, computationally intensive multiobjective optimization problems have been solved efficiently by surrogate-assisted multiobjective evolutionary algorithms. However, most of those algorithms can handle no more than 200 decision ...
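As background, a scalarization function collapses an objective vector into a single value per weight vector; the weighted Tchebycheff function below is a standard example of the kind of function a low-dimensional surrogate could approximate. The weights, reference point, and candidate objective vectors are assumptions, and this is not the paper's algorithm.

```python
# Minimal sketch: a weighted Tchebycheff scalarization, a common choice in
# decomposition-based multiobjective optimization.  Generic illustration;
# the reference point and weights below are assumptions.
import numpy as np

def tchebycheff(objs, weight, z_star):
    """Scalarize one or more objective vectors for a given weight vector."""
    objs = np.atleast_2d(objs)
    return np.max(weight * np.abs(objs - z_star), axis=1)

# Two-objective example: three candidate solutions, one weight vector.
objs = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]])
weight = np.array([0.5, 0.5])
z_star = np.array([0.0, 0.0])            # ideal point (assumed known)
print(tchebycheff(objs, weight, z_star))  # smaller is better
```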
Hyperparameter Control Using Fuzzy Logic: Evolving Policies for Adaptive Fuzzy Particle Swarm Optimization Algorithm
Nicolas Roy, Charlotte Beauthier, Alexandre Mayer
Heuristic optimization methods such as Particle Swarm Optimization depend on their parameters to achieve optimal performance on a given class of problems. Some modifications of heuristic algorithms aim at adapting those parameters during th...
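To give a flavour of fuzzy hyperparameter control, the sketch below maps a normalized stagnation signal to PSO's inertia weight through hand-rolled triangular membership functions and weighted-average defuzzification. The rule base, input signal, and output range are assumptions rather than the evolved policies studied in the paper.

```python
# Minimal sketch: a tiny fuzzy-style controller that adapts PSO's inertia
# weight from a normalized "stagnation" signal.  Purely illustrative; the
# rules, inputs, and output range are assumptions.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_inertia(stagnation):
    """Map a stagnation level in [0, 1] to an inertia weight in [0.4, 0.9]."""
    low  = tri(stagnation, -0.5, 0.0, 0.5)
    med  = tri(stagnation,  0.0, 0.5, 1.0)
    high = tri(stagnation,  0.5, 1.0, 1.5)
    # Rules: low stagnation -> small inertia (exploit), high -> large (explore).
    outputs = {0.4: low, 0.65: med, 0.9: high}
    total = sum(outputs.values()) or 1.0
    return sum(w * m for w, m in outputs.items()) / total  # weighted-average defuzzification

for s in (0.0, 0.5, 1.0):
    print(s, round(fuzzy_inertia(s), 3))
```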
Virtual Position Guided Strategy for Particle Swarm Optimization Algorithms on Multimodal Problems
Chao Li, Jun Sun, Li-Wei Li et al.
Premature convergence is a thorny problem for particle swarm optimization (PSO) algorithms, especially on multimodal problems, where maintaining swarm diversity is crucial. However, most enhancement strategies for PSO, including the existin...
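For readers unfamiliar with the baseline, the sketch below shows the standard PSO velocity and position update together with a simple swarm-diversity measure (mean distance to the swarm centroid) of the kind that anti-stagnation strategies monitor. It is plain global-best PSO on Rastrigin; the paper's virtual-position guidance is not implemented.

```python
# Minimal sketch: global-best PSO on the multimodal Rastrigin function,
# tracking a simple swarm-diversity measure each generation.  Generic PSO
# only; coefficients and budget are common defaults, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def rastrigin(x):
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

dim, n = 10, 30
pos = rng.uniform(-5.12, 5.12, (n, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), rastrigin(pos)
gbest = pbest[np.argmin(pbest_f)]

w, c1, c2 = 0.72, 1.49, 1.49             # common default coefficients
for it in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    f = rastrigin(pos)
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]
    # Diversity: mean distance to the swarm centroid; a collapse toward zero
    # is a typical symptom of premature convergence.
    diversity = np.mean(np.linalg.norm(pos - pos.mean(axis=0), axis=1))

print("best fitness:", round(pbest_f.min(), 3), "final diversity:", round(diversity, 3))
```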
Synthesising Diverse and Discriminatory Sets of Instances using Novelty Search in Combinatorial Domains
Alejandro Marrero, Eduardo Segredo, Coromoto León et al.
Gathering sufficient instance data to either train algorithm-selection models or understand algorithm footprints within an instance space can be challenging. We propose an approach to generating synthetic instances that are tailored to perf...
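The central quantity in novelty search is the novelty score, typically the mean distance of a candidate's descriptor to its k nearest neighbours in an archive. The sketch below computes that score for a random stand-in descriptor; the descriptor space, k, and the acceptance threshold are assumptions, not the paper's configuration.

```python
# Minimal sketch: the novelty score used in novelty search, i.e. the mean
# distance of a candidate's feature descriptor to its k nearest neighbours
# in an archive of previously generated instances.  Descriptors here are
# random stand-ins; k and the threshold are assumptions.
import numpy as np

def novelty(candidate, archive, k=5):
    d = np.linalg.norm(archive - candidate, axis=1)
    return np.sort(d)[:k].mean()

rng = np.random.default_rng(0)
archive = rng.random((100, 4))    # descriptors of instances seen so far
candidate = rng.random(4)

score = novelty(candidate, archive)
threshold = 0.25                  # acceptance threshold (assumption)
print("novelty:", round(score, 3),
      "-> keep" if score > threshold else "-> discard")
```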
A Layered Learning Approach to Scaling in Learning Classifier Systems for Boolean Problems
Isidro M Alvarez, Trung B Nguyen, Will N Browne et al.
Evolutionary Computation (EC) often throws away learned knowledge as it is reset for each new problem addressed. Conversely, humans can learn from small-scale problems, retain this knowledge (plus functionality) and then successfully reuse ...
Marc Kaufmann, Maxime Larcher, Johannes Lengler et al.
We study the (1:s+1) success rule for controlling the population size of the (1,λ)-EA. It was shown by Hevia Fajardo and Sudholt that this parameter control mechanism can run into problems for large s if the fitness landscape is too easy....
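One common formulation of the (1:s+1) success rule shrinks λ by a factor F after a generation that improves on the parent and grows it by F^(1/s) otherwise, so λ stays small only if roughly one generation in s+1 succeeds. The sketch below applies this rule to a plain (1,λ)-EA with comma selection on OneMax; the values of F, s, and n are illustrative, and the sketch does not reproduce the paper's analysis.

```python
# Minimal sketch: the (1:s+1) success rule adapting the offspring population
# size lambda of a (1,lambda)-EA on OneMax.  A successful generation shrinks
# lambda by F; an unsuccessful one grows it by F**(1/s).  Comma selection is
# used: the parent is always replaced by the best offspring.  Parameters are
# illustrative assumptions.
import random

def onemax(bits):
    return sum(bits)

n, s, F = 100, 1.0, 1.5
parent = [random.randint(0, 1) for _ in range(n)]
lam = 1.0

while onemax(parent) < n:
    best = None
    for _ in range(max(1, round(lam))):
        # Standard bit mutation with rate 1/n.
        child = [b ^ (random.random() < 1.0 / n) for b in parent]
        if best is None or onemax(child) > onemax(best):
            best = child
    if onemax(best) > onemax(parent):      # success: exploit, shrink lambda
        parent, lam = best, max(1.0, lam / F)
    else:                                  # failure: explore, grow lambda
        parent, lam = best, lam * F ** (1.0 / s)

print("optimum reached, final lambda =", round(lam, 2))
```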