**Compiler-R1** is the **first framework** that combines **Large Language Models (LLMs)** and **Reinforcement Learning (RL)** for compiler pass sequence auto-tuning, aimed at reducing LLVM IR instruction count. It leverages the reasoning ability of LLMs and the exploration power of RL to efficiently discover high-performance pass sequences.
---
## 🌟 Key Features
bash infer_xxxx.sh
```
⚠️ **Important**: Make sure to correctly set paths inside each inference script (`infer_xxxx.sh`) to point to your trained models and data.
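The exact variable names differ per script, so treat the following as a hypothetical sketch of the kind of paths you would edit near the top of an `infer_xxxx.sh` (the names `MODEL_PATH` and `DATA_PATH` are illustrative, not taken from the repo):

```shell
# Hypothetical variable names — open your infer_xxxx.sh to find the real ones.
MODEL_PATH="/path/to/your/trained/model"   # checkpoint produced by training
DATA_PATH="/path/to/your/test/data"        # programs to tune

echo "model: ${MODEL_PATH}"
echo "data:  ${DATA_PATH}"
```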
## Citation

If you use Compiler-R1 in your research or find it useful, please cite our paper:

```bibtex
@misc{pan2025compilerr1agenticcompilerautotuning,
  title={Compiler-R1: Towards Agentic Compiler Auto-tuning with Reinforcement Learning},
  author={Haolin Pan and Hongyu Lin and Haoran Luo and Yang Liu and Kaichun Yao and Libo Zhang and Mingjie Xing and Yanjun Wu},
  year={2025},
  eprint={2506.15701},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.15701}
}
```

## Acknowledgements

This repo benefits from [Agent-R1](https://github.com/0russwest0/Agent-R1). Thanks for their wonderful work.