Papers With Code 2

A community resource for machine learning research: papers, code, benchmarks, and state-of-the-art results.


Data sourced from the PWC Archive (CC-BY-SA 4.0). Built by the community, for the community.


PO

Parrot optimizer: Algorithm and applications to medical problems

General · Introduced 2000 · 37 papers

Description

Stochastic optimization methods have gained significant prominence as effective techniques in contemporary research, addressing complex optimization challenges efficiently. This paper introduces the Parrot Optimizer (PO), an efficient optimization method inspired by key behaviors observed in trained Pyrrhura molinae parrots. The study features qualitative analysis and comprehensive experiments to showcase the distinct characteristics of the Parrot Optimizer in handling various optimization problems. Performance evaluation involves benchmarking the proposed PO on 35 functions, encompassing classical cases and problems from the IEEE CEC 2022 test sets, and comparing it with eight popular algorithms. The results highlight the competitive advantages of the PO in terms of its exploratory and exploitative traits. Furthermore, parameter sensitivity experiments explore the adaptability of the proposed PO under varying configurations. The developed PO demonstrates effectiveness and superiority when applied to engineering design problems. To further extend the assessment to real-world applications, we included the application of PO to disease diagnosis and medical image segmentation problems, which are highly relevant and significant in the medical field. In conclusion, the findings substantiate that the PO is a promising and competitive algorithm, surpassing some existing algorithms in the literature. The supplementary files and open-source code of the proposed Parrot Optimizer (PO) are available at https://aliasgharheidari.com/PO.html
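As a rough illustration of the class of method the abstract describes, the sketch below implements a *generic* population-based stochastic optimizer evaluated on the classical sphere benchmark. The specific update rules of the Parrot Optimizer (its behavior-inspired foraging, staying, communicating, and fear phases) are defined in the paper, not here; the decaying-step, move-toward-best scheme in this sketch is an assumption chosen only to show the overall exploration/exploitation structure such algorithms share.

```python
import random

def sphere(x):
    """Classical benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def stochastic_optimize(f, dim, bounds, pop_size=30, iters=200, seed=0):
    """Generic population-based stochastic search (NOT the actual PO update
    rules). Each agent takes a random step that decays over time (exploration)
    plus a pull toward the best-known solution (exploitation), with greedy
    acceptance of improving moves."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for t in range(iters):
        # Random-step size shrinks linearly as the search progresses.
        step = 0.1 * (hi - lo) * (1.0 - t / iters)
        for i, x in enumerate(pop):
            cand = [
                min(hi, max(lo,                      # clamp to the search box
                    xi + rng.uniform(-step, step)    # exploration
                       + rng.random() * (bi - xi)))  # pull toward best
                for xi, bi in zip(x, best)
            ]
            if f(cand) < f(x):  # greedy acceptance: keep only improvements
                pop[i] = cand
        best = min(pop + [best], key=f)  # best-so-far is never lost
    return best, f(best)
```

Benchmarking in the paper follows the same pattern at larger scale: run the optimizer on each of the 35 test functions and compare the best objective values reached against the eight competitor algorithms.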

Papers Using This Method

- DCRM: A Heuristic to Measure Response Pair Quality in Preference Optimization (2025-06-17)
- Zeroth-Order Optimization is Secretly Single-Step Policy Optimization (2025-06-17)
- Hybrid Meta-learners for Estimating Heterogeneous Treatment Effects (2025-06-16)
- Self-NPO: Negative Preference Optimization of Diffusion Models by Simply Learning from Itself without Explicit Preference Annotations (2025-05-17)
- SoLoPO: Unlocking Long-Context Capabilities in LLMs via Short-to-Long Preference Optimization (2025-05-16)
- Subspace-Based Super-Resolution Sensing for Bi-Static ISAC with Clock Asynchronism (2025-05-15)
- Robust Markov stability for community detection at a scale learned based on the structure (2025-04-15)
- A Hybrid Model/Data-Driven Solution to Channel, Position and Orientation Tracking in mmWave Vehicular Systems (2025-03-07)
- D2S-FLOW: Automated Parameter Extraction from Datasheets for SPICE Model Generation Using Large Language Models (2025-02-23)
- The devasting economic impact of Callinectes sapidus on the clam fishing in the Po Delta (Italy): Striking evidence from novel field data (2025-02-10)
- Preference Optimization via Contrastive Divergence: Your Reward Model is Secretly an NLL Estimator (2025-02-06)
- DFF: Decision-Focused Fine-tuning for Smarter Predict-then-Optimize with Limited Data (2025-01-03)
- Applying the maximum entropy principle to neural networks enhances multi-species distribution models (2024-12-26)
- Distributionally Robust Performative Prediction (2024-12-05)
- Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization (2024-11-15)
- Aligning Visual Contrastive learning models via Preference Optimization (2024-11-12)
- Learning Loss Landscapes in Preference Optimization (2024-11-10)
- On Diffusion Models for Multi-Agent Partial Observability: Shared Attractors, Error Bounds, and Composite Flow (2024-10-17)
- SparsePO: Controlling Preference Alignment of LLMs via Sparse Token Masks (2024-10-07)
- Post-edits Are Preferences Too (2024-10-03)