Evolutionary Optimization of Model Merging Recipes
Authors: Takuya Akiba, Makoto Shing, Yujin Tang, Qi Sun, David Ha
What
This paper introduces a novel approach, Evolutionary Model Merge, which utilizes evolutionary algorithms to automate the merging of open-source foundation models, enabling the creation of new models with combined capabilities without the need for extensive training.
Why
This paper is important because it presents a more efficient and accessible method for developing foundation models, particularly for specialized domains and non-English languages, by leveraging the collective intelligence of existing open-source models.
How
The authors employ evolutionary algorithms to optimize model merging in two spaces: parameter space (PS) for combining model weights and data flow space (DFS) for optimizing token inference paths through model layers. They demonstrate their method by evolving a Japanese LLM with Math reasoning capabilities and a culturally-aware Japanese VLM.
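The parameter-space step can be pictured as an evolutionary strategy (e.g., CMA-ES) searching over per-layer mixing weights between source models. The sketch below is illustrative only, not the authors' code: `layer_of` (mapping a parameter name to its layer index) and `evaluate_on_benchmark` (scoring a merged state dict on a target task) are hypothetical placeholders, and the sketch uses plain linear interpolation as the merging operator rather than the paper's actual recipe.

```python
# Minimal sketch of evolutionary parameter-space (PS) merging: CMA-ES searches
# per-layer interpolation weights between two source models' state dicts.
# `layer_of` and `evaluate_on_benchmark` are placeholders, not real APIs.
import cma  # pip install cma


def merge_state_dicts(sd_a, sd_b, alphas, layer_of):
    """Interpolate each parameter tensor using the weight of its layer."""
    merged = {}
    for name, tensor_a in sd_a.items():
        a = alphas[layer_of(name)]
        merged[name] = (1.0 - a) * tensor_a + a * sd_b[name]
    return merged


def fitness(alphas, sd_a, sd_b, layer_of, evaluate_on_benchmark):
    alphas = [min(max(float(a), 0.0), 1.0) for a in alphas]  # clamp to [0, 1]
    merged = merge_state_dicts(sd_a, sd_b, alphas, layer_of)
    return -evaluate_on_benchmark(merged)  # CMA-ES minimizes, so negate score


def evolve_merge(sd_a, sd_b, n_layers, layer_of, evaluate_on_benchmark,
                 generations=50):
    # Start from an even 50/50 mix per layer and let the search adapt it.
    es = cma.CMAEvolutionStrategy(n_layers * [0.5], 0.2)
    for _ in range(generations):
        candidates = es.ask()
        losses = [fitness(c, sd_a, sd_b, layer_of, evaluate_on_benchmark)
                  for c in candidates]
        es.tell(candidates, losses)
    return merge_state_dicts(sd_a, sd_b, list(es.result.xbest), layer_of)
```

Searching at layer granularity (rather than one global weight) lets the optimizer keep different layers closer to different specialist models, which is the kind of combination a hand-tuned merge would struggle to find.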
Result
The evolved Japanese LLM achieves state-of-the-art performance on Japanese LLM benchmarks, surpassing some 70B parameter models despite having only 7B parameters. Similarly, the evolved Japanese VLM excels at handling culturally specific content, outperforming existing Japanese VLMs on a newly created benchmark.
Limitations and Future Work
Limitations include the potential for illogical outputs and the lack of instruction fine-tuning. Future work includes applying the method to image generation, evolving the selection of source models, and developing a self-improving swarm of models.
Abstract
We present a novel application of evolutionary algorithms to automate the creation of powerful foundation models. While model merging has emerged as a promising approach for LLM development due to its cost-effectiveness, it currently relies on human intuition and domain knowledge, limiting its potential. Here, we propose an evolutionary approach that overcomes this limitation by automatically discovering effective combinations of diverse open-source models, harnessing their collective intelligence without requiring extensive additional training data or compute. Our approach operates in both parameter space and data flow space, allowing for optimization beyond just the weights of the individual models. This approach even facilitates cross-domain merging, generating models like a Japanese LLM with Math reasoning capabilities. Surprisingly, our Japanese Math LLM achieved state-of-the-art performance on a variety of established Japanese LLM benchmarks, even surpassing models with significantly more parameters, despite not being explicitly trained for such tasks. Furthermore, a culturally-aware Japanese VLM generated through our approach demonstrates its effectiveness in describing Japanese culture-specific content, outperforming previous Japanese VLMs. This work not only contributes new state-of-the-art models back to the open-source community, but also introduces a new paradigm for automated model composition, paving the way for exploring alternative, efficient approaches to foundation model development.
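The data-flow-space optimization mentioned in the abstract can likewise be pictured as evolving the inference path itself: a genome is a sequence of (model, layer) pairs, and the hidden state is routed through that sequence of layers drawn from several source models. The sketch below is a rough illustration under stated assumptions, not the paper's algorithm: it uses simple hill-climbing in place of the actual evolutionary search, and `layer_pools` (model id to list of callable layers) and `score_path` (benchmark score for a candidate path) are hypothetical placeholders.

```python
# Illustrative sketch of data-flow-space (DFS) merging: evolve a path of
# (model_id, layer_index) pairs that routes a hidden state through layers
# from multiple source models. Hill-climbing stands in for the real search;
# `layer_pools` and `score_path` are placeholders.
import copy
import random


def run_path(hidden, path, layer_pools):
    """Pass a hidden state through layers drawn from several source models."""
    for model_id, layer_idx in path:
        hidden = layer_pools[model_id][layer_idx](hidden)
    return hidden


def mutate(path, layer_pools):
    """Swap one step of the path for a random layer from any source model."""
    child = copy.deepcopy(path)
    i = random.randrange(len(child))
    model_id = random.choice(list(layer_pools))
    child[i] = (model_id, random.randrange(len(layer_pools[model_id])))
    return child


def evolve_path(layer_pools, score_path, path_len=32, generations=200):
    # Start from the layers of one model, then let mutation mix in the others.
    first = next(iter(layer_pools))
    best = [(first, i % len(layer_pools[first])) for i in range(path_len)]
    best_score = score_path(best)
    for _ in range(generations):
        child = mutate(best, layer_pools)
        child_score = score_path(child)
        if child_score > best_score:
            best, best_score = child, child_score
    return best
```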