{
"stats": {
"total": 286,
"has_contribution_marker": 1,
"has_background_marker": 0,
"has_method_marker": 0,
"extracted_from_contribution": 1,
"full_excerpt": 285
},
"papers": [
{
"url": "/papers/llm/algorithm/architecture/attention/2025/06/01/gated-attention-for-large-language-models-non-linearity-sparsity-and-attention-sink-free.html",
"title": "Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free",
"raw_excerpt": "本文系统性地研究了在标准 Softmax 注意力机制中引入门控(gating)机制的影响。",
"extracted_excerpt": "本文系统性地研究了在标准 Softmax 注意力机制中引入门控(gating)机制的影响。",
"excerpt_length": 45,
"extracted_length": 45,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2023/12/01/gated-linear-attention-transformers-with-hardware-efficient-training.html",
"title": "Gated Linear Attention Transformers with Hardware-Efficient Training",
"raw_excerpt": "本文致力于解决现有线性注意力模型相较于标准Softmax注意力在性能和实际运行速度上的不足。",
"extracted_excerpt": "本文致力于解决现有线性注意力模型相较于标准Softmax注意力在性能和实际运行速度上的不足。",
"excerpt_length": 46,
"extracted_length": 46,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2022/01/01/transformer-quality-in-linear-time.html",
"title": "Transformer Quality in Linear Time",
"raw_excerpt": "核心问题: 尽管Transformer模型【35, Vaswani, A. 等人, Attention is all you need, NIPS 201...",
"extracted_excerpt": "核心问题: 尽管Transformer模型【35, Vaswani, A. 等人, Attention is all you need, NIPS 201...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2022/10/01/the-devil-in-linear-transformer.html",
"title": "The Devil in Linear Transformer",
"raw_excerpt": "本文旨在解决现有基于核(kernel-based)的线性 Transformer 相较于标准 Transformer 性能下降的问题。作者通过深入分析,指...",
"extracted_excerpt": "本文旨在解决现有基于核(kernel-based)的线性 Transformer 相较于标准 Transformer 性能下降的问题。作者通过深入分析,指...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2025/08/01/artificial-hippocampus-networks-for-efficient-long-context-modeling.html",
"title": "Artificial Hippocampus Networks for Efficient Long-Context Modeling",
"raw_excerpt": "核心问题: 长序列建模面临一个根本性的权衡:一方面是类RNN模型中压缩性固定大小内存的高效率,另一方面是基于注意力机制的Transformer中无损增长内...",
"extracted_excerpt": "核心问题: 长序列建模面临一个根本性的权衡:一方面是类RNN模型中压缩性固定大小内存的高效率,另一方面是基于注意力机制的Transformer中无损增长内...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2021/06/01/linear-transformers-are-secretly-fast-weight-programmers.html",
"title": "Linear Transformers Are Secretly Fast Weight Programmers",
"raw_excerpt": "本文的核心在于揭示了线性化自注意力机制与20世纪90年代初的快速权重编程器(Fast Weight Programmers, FWP)在形式上的等价性。基...",
"extracted_excerpt": "本文的核心在于揭示了线性化自注意力机制与20世纪90年代初的快速权重编程器(Fast Weight Programmers, FWP)在形式上的等价性。基...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2024/04/01/linear-attention-sequence-parallelism.html",
"title": "Linear Attention Sequence Parallelism",
"raw_excerpt": "本文针对线性序列建模方法(如线性注意力)提出了一种名为线性注意力序列并行(Linear Attention Sequence Parallelism, L...",
"extracted_excerpt": "本文针对线性序列建模方法(如线性注意力)提出了一种名为线性注意力序列并行(Linear Attention Sequence Parallelism, L...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2021/02/01/learning-associative-inference-using-fast-weight-memory.html",
"title": "LEARNING ASSOCIATIVE INFERENCE USING FAST WEIGHT MEMORY",
"raw_excerpt": "核心问题:\n现代深度神经网络(NNs)尽管在许多人工智能问题上取得了成功,但在需要组合不同经验中提取的特征并进行关联(即组合泛化)的情境中表现不佳。例如,...",
"extracted_excerpt": "核心问题:\n现代深度神经网络(NNs)尽管在许多人工智能问题上取得了成功,但在需要组合不同经验中提取的特征并进行关联(即组合泛化)的情境中表现不佳。例如,...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2025/01/01/minimax-01-scaling-foundation-models-with-lightning-attention.html",
"title": "MiniMax-01: Scaling Foundation Models with Lightning Attention",
"raw_excerpt": "本文旨在构建一个性能媲美顶尖商业模型,同时上下文窗口长度大一个数量级的模型。这一目标需要在网络架构、数据和计算之间进行仔细权衡。",
"extracted_excerpt": "本文旨在构建一个性能媲美顶尖商业模型,同时上下文窗口长度大一个数量级的模型。这一目标需要在网络架构、数据和计算之间进行仔细权衡。",
"excerpt_length": 64,
"extracted_length": 64,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2024/05/01/various-lengths-constant-speed-efficient-language-modeling-with-lightning-attention.html",
"title": "Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention",
"raw_excerpt": "本文旨在解决现有线性注意力机制在大型语言模型中未被广泛采用的两个核心问题:1) 性能不佳:与顶尖的基于Softmax注意力的模型相比,存在明显的性能差距;...",
"extracted_excerpt": "本文旨在解决现有线性注意力机制在大型语言模型中未被广泛采用的两个核心问题:1) 性能不佳:与顶尖的基于Softmax注意力的模型相比,存在明显的性能差距;...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2023/07/01/scaling-transnormer-to-175-billion-parameters.html",
"title": "Scaling TransNormer to 175 Billion Parameters",
"raw_excerpt": "本文旨在解决传统Transformer模型中softmax注意力机制带来的二次方时间复杂度问题,该问题限制了模型在训练和推理阶段的可扩展性和效率。尽管已有...",
"extracted_excerpt": "本文旨在解决传统Transformer模型中softmax注意力机制带来的二次方时间复杂度问题,该问题限制了模型在训练和推理阶段的可扩展性和效率。尽管已有...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2024/01/01/lightning-attention-2-a-free-lunch-for-handling-unlimited-sequence-lengths-in-large-language-models.html",
"title": "Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models",
"raw_excerpt": "核心问题\nTransformer架构的计算复杂度随着输入序列的长度呈二次方增长,这使得处理极长序列变得具有挑战性。尽管线性注意力理论上通过核技巧可以将计算...",
"extracted_excerpt": "核心问题\nTransformer架构的计算复杂度随着输入序列的长度呈二次方增长,这使得处理极长序列变得具有挑战性。尽管线性注意力理论上通过核技巧可以将计算...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/linear/2020/06/01/transformers-are-rnns-fast-autoregressive-transformers-with-linear-attention.html",
"title": "Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention",
"raw_excerpt": "核心问题:\n标准的Transformer模型虽然在多种任务中表现出色,但其核心组件自注意力(self-attention)的计算和内存复杂性与输入序列长度...",
"extracted_excerpt": "核心问题:\n标准的Transformer模型虽然在多种任务中表现出色,但其核心组件自注意力(self-attention)的计算和内存复杂性与输入序列长度...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2021/10/01/combining-recurrent-convolutional-and-continuous-time-models-with-linear-state-space-layers.html",
"title": "Combining Recurrent, Convolutional, and Continuous-time Models with Linear State-Space Layers",
"raw_excerpt": "本文旨在解决机器学习中高效建模长序列数据(超过几千个时间步)的挑战。传统序列模型范式,如循环神经网络(RNNs)、卷积神经网络(CNNs)和神经微分方程(...",
"extracted_excerpt": "本文旨在解决机器学习中高效建模长序列数据(超过几千个时间步)的挑战。传统序列模型范式,如循环神经网络(RNNs)、卷积神经网络(CNNs)和神经微分方程(...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2019/01/01/transformer-xl-attentive-language-models-beyond-a-fixed-length-context.html",
"title": "Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context",
"raw_excerpt": "核心问题:尽管Transformer在学习长期依赖方面具有潜力,但在语言建模中,它们受到固定长度上下文的限制。这种方法存在两个关键问题:1) 模型无法捕获...",
"extracted_excerpt": "核心问题:尽管Transformer在学习长期依赖方面具有潜力,但在语言建模中,它们受到固定长度上下文的限制。这种方法存在两个关键问题:1) 模型无法捕获...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2024/12/01/gated-delta-networks-improving-mamba2-with-delta-rule.html",
"title": "GATED DELTA NETWORKS: IMPROVING MAMBA2 WITH DELTA RULE",
"raw_excerpt": "本文旨在解决线性Transformer在检索和长上下文任务中性能受限的问题。尽管线性Transformer作为标准Transformer的高效替代品备受关...",
"extracted_excerpt": "本文旨在解决线性Transformer在检索和长上下文任务中性能受限的问题。尽管线性Transformer作为标准Transformer的高效替代品备受关...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2024/03/01/jamba-a-hybrid-transformer-mamba-language-model.html",
"title": "Jamba: A Hybrid Transformer-Mamba Language Model",
"raw_excerpt": "本文介绍了Jamba,一种新型的、公开可用的大型语言模型。Jamba基于一种创新的混合架构,该架构融合了Transformer层、Mamba层(一种最新的...",
"extracted_excerpt": "本文介绍了Jamba,一种新型的、公开可用的大型语言模型。Jamba基于一种创新的混合架构,该架构融合了Transformer层、Mamba层(一种最新的...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2023/12/01/mamba-linear-time-sequence-modeling-with-selective-state-spaces.html",
"title": "Mamba: Linear-Time Sequence Modeling with Selective State Spaces",
"raw_excerpt": "本文旨在解决现有主流基础模型(Foundation Models)骨干架构 Transformer 在处理长序列时计算效率低下的问题。Transforme...",
"extracted_excerpt": "本文旨在解决现有主流基础模型(Foundation Models)骨干架构 Transformer 在处理长序列时计算效率低下的问题。Transforme...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2024/05/01/transformers-are-ssms-generalized-models-and-efficient-algorithms-through-structured-state-space-duality.html",
"title": "Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality",
"raw_excerpt": "本文的核心目标是建立结构化状态空间模型(SSMs)与多种注意力变体之间的丰富理论联系,从而将为Transformer开发的算法和系统优化迁移到SSM上,构...",
"extracted_excerpt": "本文的核心目标是建立结构化状态空间模型(SSMs)与多种注意力变体之间的丰富理论联系,从而将为Transformer开发的算法和系统优化迁移到SSM上,构...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2023/04/01/hungry-hungry-hippos-towards-language-modeling-with-state-space-models.html",
"title": "Hungry Hungry Hippos: Towards Language Modeling with State Space Models",
"raw_excerpt": "本文旨在解决状态空间模型(SSM)在语言建模领域相较于Transformer存在的两个核心问题:模型表达能力不足和硬件利用率低下导致的训练速度慢。",
"extracted_excerpt": "本文旨在解决状态空间模型(SSM)在语言建模领域相较于Transformer存在的两个核心问题:模型表达能力不足和硬件利用率低下导致的训练速度慢。",
"excerpt_length": 73,
"extracted_length": 73,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2021/10/01/efficiently-modeling-long-sequences-with-structured-state-spaces.html",
"title": "Efficiently Modeling Long Sequences with Structured State Spaces",
"raw_excerpt": "本文旨在解决序列建模中的一个核心问题:高效处理包含长距离依赖(LRDs)的数据。现有的主流模型,如循环神经网络(RNNs)、卷积神经网络(CNNs)和Tr...",
"extracted_excerpt": "本文旨在解决序列建模中的一个核心问题:高效处理包含长距离依赖(LRDs)的数据。现有的主流模型,如循环神经网络(RNNs)、卷积神经网络(CNNs)和Tr...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2023/03/01/simplified-state-space-layers-for-sequence-modeling.html",
"title": "SIMPLIFIED STATE SPACE LAYERS FOR SEQUENCE MODELING",
"raw_excerpt": "本文旨在解决机器学习中高效建模长序列的挑战性问题,即序列中相隔数千个时间步的观测值可能共同编码了解决任务的关键信息。尽管已有利普希茨循环神经网络(RNN)...",
"extracted_excerpt": "本文旨在解决机器学习中高效建模长序列的挑战性问题,即序列中相隔数千个时间步的观测值可能共同编码了解决任务的关键信息。尽管已有利普希茨循环神经网络(RNN)...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/ssm/2024/02/01/moe-mamba-efficient-selective-state-space-models-with-mixture-of-experts.html",
"title": "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts",
"raw_excerpt": "本文旨在解决现有大型语言模型(LLM)架构的局限性,并探索进一步扩展语言模型的可能性。当前LLM主要依赖于Transformer架构【65, Vaswan...",
"extracted_excerpt": "本文旨在解决现有大型语言模型(LLM)架构的局限性,并探索进一步扩展语言模型的可能性。当前LLM主要依赖于Transformer架构【65, Vaswan...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/sparsity/2020/04/01/longformer-the-long-document-transformer.html",
"title": "Longformer: The Long-Document Transformer",
"raw_excerpt": "本文旨在解决基于Transformer的模型因其自注意力操作(其计算量与序列长度成二次方关系)而无法处理长序列的局限性。",
"extracted_excerpt": "本文旨在解决基于Transformer的模型因其自注意力操作(其计算量与序列长度成二次方关系)而无法处理长序列的局限性。",
"excerpt_length": 60,
"extracted_length": 60,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/sparsity/2025/02/01/moba-mixture-of-block-attention-for-long-context-llms.html",
"title": "MOBA: MIXTURE OF BLOCK ATTENTION FOR LONG-CONTEXT LLMS",
"raw_excerpt": "本文旨在解决大语言模型(LLM)在处理长序列时面临的核心挑战,即传统自注意力机制带来的二次方计算复杂度增长问题。现有方法通常引入强结构偏见(如窗口注意力)...",
"extracted_excerpt": "本文旨在解决大语言模型(LLM)在处理长序列时面临的核心挑战,即传统自注意力机制带来的二次方计算复杂度增长问题。现有方法通常引入强结构偏见(如窗口注意力)...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/architecture/attention/sparsity/2025/02/01/native-sparse-attention-hardware-aligned-and-natively-trainable-sparse-attention.html",
"title": "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention",
"raw_excerpt": "本文提出了一种名为NSA(Natively trainable Sparse Attention,原生可训练稀疏注意力)的机制,旨在解决长上下文语言模型中...",
"extracted_excerpt": "本文提出了一种名为NSA(Natively trainable Sparse Attention,原生可训练稀疏注意力)的机制,旨在解决长上下文语言模型中...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2025/04/01/mem0-building-production-ready-ai-agents-with-scalable-long-term-memory.html",
"title": "Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory",
"raw_excerpt": "本文旨在解决大型语言模型(LLMs)因固定上下文窗口而在跨会话的长时间对话中难以保持连贯性的根本问题。为应对这一挑战,研究者引入了名为 Mem0 的新型记...",
"extracted_excerpt": "本文旨在解决大型语言模型(LLMs)因固定上下文窗口而在跨会话的长时间对话中难以保持连贯性的根本问题。为应对这一挑战,研究者引入了名为 Mem0 的新型记...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2025/03/01/search-r1-training-llms-to-reason-and-leverage-search-engines-with-reinforcement-learning.html",
"title": "Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning",
"raw_excerpt": "本文旨在解决大型语言模型(LLMs)在处理复杂推理和获取最新外部信息时面临的挑战。现有的方法,如检索增强生成(RAG)或将搜索引擎视为工具的提示方法,通常...",
"extracted_excerpt": "本文旨在解决大型语言模型(LLMs)在处理复杂推理和获取最新外部信息时面临的挑战。现有的方法,如检索增强生成(RAG)或将搜索引擎视为工具的提示方法,通常...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2022/01/01/chain-of-thought-prompting-elicits-reasoning-in-large-language-models.html",
"title": "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models",
"raw_excerpt": "本文探讨了一种名为“思维链(Chain-of-Thought, CoT)”提示的方法,该方法通过生成一系列中间推理步骤,显著提高了大型语言模型执行复杂推理...",
"extracted_excerpt": "本文探讨了一种名为“思维链(Chain-of-Thought, CoT)”提示的方法,该方法通过生成一系列中间推理步骤,显著提高了大型语言模型执行复杂推理...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2024/09/01/large-language-model-based-agents-for-software-engineering-a-survey.html",
"title": "Large Language Model-Based Agents for Software Engineering: A Survey",
"raw_excerpt": "本文首次对106篇将基于大型语言模型(LLM)的智能体应用于软件工程(SE)的论文进行了全面综述。研究从软件工程和智能体两个视角,分析了现有基于LLM的智...",
"extracted_excerpt": "本文首次对106篇将基于大型语言模型(LLM)的智能体应用于软件工程(SE)的论文进行了全面综述。研究从软件工程和智能体两个视角,分析了现有基于LLM的智...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2023/10/01/fireact-toward-language-agent-fine-tuning.html",
"title": "FIREACT: TOWARD LANGUAGE AGENT FINE-TUNING",
"raw_excerpt": "本文探讨了通过微调语言模型(LMs)来构建语言代理这一被忽视的研究方向。",
"extracted_excerpt": "本文探讨了通过微调语言模型(LMs)来构建语言代理这一被忽视的研究方向。",
"excerpt_length": 36,
"extracted_length": 36,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2023/12/01/retrieval-augmented-generation-for-large-language-models-a-survey.html",
"title": "Retrieval-Augmented Generation for Large Language Models: A Survey",
"raw_excerpt": "本文对检索增强生成(RAG)领域进行了系统性的回顾和梳理。其主要贡献如下:",
"extracted_excerpt": "",
"excerpt_length": 37,
"extracted_length": 0,
"is_extracted": true
},
{
"url": "/papers/llm/algorithm/agent/2023/11/01/reac-t-synergizing-reasoning-and-acting-in-language-models.html",
"title": "REAC T: SYNERGIZING REASONING AND ACTING IN LANGUAGE MODELS",
"raw_excerpt": "本文探讨了如何利用大型语言模型(LLMs)以交错的方式生成推理轨迹和任务特定动作,从而实现两者之间更大的协同作用。核心问题在于,现有工作中,LLMs 的推...",
"extracted_excerpt": "本文探讨了如何利用大型语言模型(LLMs)以交错的方式生成推理轨迹和任务特定动作,从而实现两者之间更大的协同作用。核心问题在于,现有工作中,LLMs 的推...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2025/09/01/the-landscape-of-agentic-reinforcement-learning-for-llms-a-survey.html",
"title": "The Landscape of Agentic Reinforcement Learning for LLMs: A Survey",
"raw_excerpt": "本文系统性地综述了“智能体强化学习”(Agentic RL)这一新兴范式,该范式将大型语言模型(LLM)从被动的序列生成器转变为嵌入在复杂动态世界中的自主...",
"extracted_excerpt": "本文系统性地综述了“智能体强化学习”(Agentic RL)这一新兴范式,该范式将大型语言模型(LLM)从被动的序列生成器转变为嵌入在复杂动态世界中的自主...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2023/02/01/toolformer-language-models-can-teach-themselves-to-use-tools.html",
"title": "Toolformer: Language Models Can Teach Themselves to Use Tools",
"raw_excerpt": "核心问题: 大型语言模型(LLMs)虽然在许多自然语言处理任务上表现出色,但仍存在一些固有限制,即使通过进一步扩大模型规模也难以完全解决。这些限制包括:\n...",
"extracted_excerpt": "核心问题: 大型语言模型(LLMs)虽然在许多自然语言处理任务上表现出色,但仍存在一些固有限制,即使通过进一步扩大模型规模也难以完全解决。这些限制包括:\n...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2025/03/01/r1-searcher-incentivizing-the-search-capability-in-llms-via-reinforcement-learning.html",
"title": "R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning",
"raw_excerpt": "本文旨在解决现有大型推理模型(LRM)在处理对时间敏感或知识密集型问题时,因依赖内部知识而导致的不准确和幻觉问题。尽管检索增强生成(RAG)技术被广泛研究...",
"extracted_excerpt": "本文旨在解决现有大型推理模型(LRM)在处理对时间敏感或知识密集型问题时,因依赖内部知识而导致的不准确和幻觉问题。尽管检索增强生成(RAG)技术被广泛研究...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2025/05/01/a-survey-on-test-time-scaling-in-large-language-models-what-how-where-and-how-well.html",
"title": "A Survey on Test-Time Scaling in Large Language Models: What, How, Where, and How Well",
"raw_excerpt": "核心问题:随着预训练阶段通过扩展计算(数据和参数)带来的性能提升逐渐放缓,研究重心已转向如何在测试时充分激发大型语言模型(LLMs)中编码的智能,以最大化...",
"extracted_excerpt": "核心问题:随着预训练阶段通过扩展计算(数据和参数)带来的性能提升逐渐放缓,研究重心已转向如何在测试时充分激发大型语言模型(LLMs)中编码的智能,以最大化...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2023/03/01/reac-t-synergizing-reasoning-and-acting-in-language-models.html",
"title": "REAC T: SYNERGIZING REASONING AND ACTING IN LANGUAGE MODELS",
"raw_excerpt": "本文探讨了如何利用大型语言模型(LLMs)以交错的方式生成推理轨迹和任务特定动作,从而实现两者之间更强的协同作用。核心问题在于,先前研究主要将LLMs的推...",
"extracted_excerpt": "本文探讨了如何利用大型语言模型(LLMs)以交错的方式生成推理轨迹和任务特定动作,从而实现两者之间更强的协同作用。核心问题在于,先前研究主要将LLMs的推...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2023/12/01/tree-of-thoughts-deliberate-problem-solving-with-large-language-models.html",
"title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models",
"raw_excerpt": "本文旨在解决大型语言模型(LMs)在推理时局限于逐个词元(token-level)、从左到右决策过程的问题,这一机制在需要探索、策略性前瞻或初始决策至关重...",
"extracted_excerpt": "本文旨在解决大型语言模型(LMs)在推理时局限于逐个词元(token-level)、从左到右决策过程的问题,这一机制在需要探索、策略性前瞻或初始决策至关重...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2022/07/01/webshop-towards-scalable-real-world-web-interaction-with-grounded-language-agents.html",
"title": "WebShop: Towards Scalable Real-World Web Interaction with Grounded Language Agents",
"raw_excerpt": "核心问题与研究目标: 近期的自然语言处理(NLP)和强化学习(RL)在能够利用语言上下文进行序列决策的智能体方面取得了进展。然而,一方面,这些交互式任务在...",
"extracted_excerpt": "核心问题与研究目标: 近期的自然语言处理(NLP)和强化学习(RL)在能够利用语言上下文进行序列决策的智能体方面取得了进展。然而,一方面,这些交互式任务在...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/agent/2023/09/01/tree-of-thoughts-deliberate-problem-solving-with-large-language-models.html",
"title": "Tree of Thoughts: Deliberate Problem Solving with Large Language Models",
"raw_excerpt": "本文旨在解决大型语言模型(LMs)在推理时局限于逐个词元(token-level)、从左到右决策过程的问题,这一机制在需要探索、策略性前瞻或初始决策至关重...",
"extracted_excerpt": "本文旨在解决大型语言模型(LMs)在推理时局限于逐个词元(token-level)、从左到右决策过程的问题,这一机制在需要探索、策略性前瞻或初始决策至关重...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2023/05/01/direct-preference-optimization-your-language-model-is-secretly-a-reward-model.html",
"title": "Direct Preference Optimization: Your Language Model is Secretly a Reward Model",
"raw_excerpt": "图1:DPO在避免强化学习的同时优化人类偏好。现有的使用人类反馈微调语言模型的方法首先将奖励模型拟合到提示和人类对响应对的偏好数据集上,然后使用RL找到一...",
"extracted_excerpt": "图1:DPO在避免强化学习的同时优化人类偏好。现有的使用人类反馈微调语言模型的方法首先将奖励模型拟合到提示和人类对响应对的偏好数据集上,然后使用RL找到一...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2016/01/01/mastering-the-game-of-go-with-deep-neural-networks-and-tree-search.html",
"title": "Mastering the game of Go with deep neural networks and tree search",
"raw_excerpt": "本文针对围棋游戏的巨大搜索空间和棋盘位置评估难度,提出了一种新的计算机围棋方法,使用“价值网络”评估棋盘位置和“策略网络”选择走法。这些深度神经网络通过从...",
"extracted_excerpt": "本文针对围棋游戏的巨大搜索空间和棋盘位置评估难度,提出了一种新的计算机围棋方法,使用“价值网络”评估棋盘位置和“策略网络”选择走法。这些深度神经网络通过从...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2025/05/01/the-entropy-mechanism-of-reinforcement-learning-for-reasoning-language-models.html",
"title": "The Entropy Mechanism of Reinforcement Learning for Reasoning Language Models",
"raw_excerpt": "本文旨在克服在扩展强化学习(RL)以用于大型语言模型(LLMs)推理时的一个主要障碍,即策略熵的崩溃。",
"extracted_excerpt": "本文旨在克服在扩展强化学习(RL)以用于大型语言模型(LLMs)推理时的一个主要障碍,即策略熵的崩溃。",
"excerpt_length": 51,
"extracted_length": 51,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2024/08/01/scaling-llm-test-time-compute-optimally-can-be-more-effective-than-scaling-model-parameters.html",
"title": "Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters",
"raw_excerpt": "本文旨在探讨如何通过增加测试时(test-time)的计算量来提升大型语言模型(LLM)的性能,特别是针对具有挑战性的任务。研究的核心问题是:在给定固定的...",
"extracted_excerpt": "本文旨在探讨如何通过增加测试时(test-time)的计算量来提升大型语言模型(LLM)的性能,特别是针对具有挑战性的任务。研究的核心问题是:在给定固定的...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2022/03/01/training-language-models-to-follow-instructions-with-human-feedback.html",
"title": "Training language models to follow instructions with human feedback",
"raw_excerpt": "本文旨在解决大型语言模型(LMs)本质上并不擅长遵循用户意图的问题,即模型与用户不“对齐”(aligned)。大型语言模型可能会生成不真实、有毒或对用户无...",
"extracted_excerpt": "本文旨在解决大型语言模型(LMs)本质上并不擅长遵循用户意图的问题,即模型与用户不“对齐”(aligned)。大型语言模型可能会生成不真实、有毒或对用户无...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2024/01/01/self-play-fine-tuning-converts-weak-language-models-to-strong-language-models.html",
"title": "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models",
"raw_excerpt": "本文研究了如何在不获取额外人工标注数据的情况下,将一个弱大型语言模型(LLM)提升为一个强LLM。",
"extracted_excerpt": "本文研究了如何在不获取额外人工标注数据的情况下,将一个弱大型语言模型(LLM)提升为一个强LLM。",
"excerpt_length": 49,
"extracted_length": 49,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2024/07/01/helpsteer2-open-source-dataset-for-training-top-performing-reward-models.html",
"title": "HelpSteer2: Open-source dataset for training top-performing reward models",
"raw_excerpt": "本文介绍并发布了HelpSteer2,一个高质量、遵循宽松许可(CC-BY-4.0)的偏好数据集,旨在解决当前大型语言模型(LLM)对齐领域中高质量、开放...",
"extracted_excerpt": "本文介绍并发布了HelpSteer2,一个高质量、遵循宽松许可(CC-BY-4.0)的偏好数据集,旨在解决当前大型语言模型(LLM)对齐领域中高质量、开放...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2017/12/01/rllib-abstractions-for-distributed-reinforcement-learning.html",
"title": "RLlib: Abstractions for Distributed Reinforcement Learning",
"raw_excerpt": "本文旨在解决强化学习(RL)领域在系统和抽象设计方面进展缓慢的问题,与深度学习(DL)领域形成鲜明对比。尽管RL社区受益于DL的系统进步,但RL算法固有的...",
"extracted_excerpt": "本文旨在解决强化学习(RL)领域在系统和抽象设计方面进展缓慢的问题,与深度学习(DL)领域形成鲜明对比。尽管RL社区受益于DL的系统进步,但RL算法固有的...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2025/08/01/on-policy-rl-meets-off-policy-experts-harmonizing-supervised-fine-tuning-and-reinforcement-learning-via-dynamic-weighting.html",
"title": "ON-POLICY RL MEETS OFF-POLICY EXPERTS: HARMONIZING SUPERVISED FINE-TUNING AND REINFORCEMENT LEARNING VIA DYNAMIC WEIGHTING",
"raw_excerpt": "本文旨在解决整合监督微调(SFT)和强化学习(RL)时遇到的挑战,特别是现有方法可能破坏模型已建立的模式并导致对专家数据的过拟合。",
"extracted_excerpt": "本文旨在解决整合监督微调(SFT)和强化学习(RL)时遇到的挑战,特别是现有方法可能破坏模型已建立的模式并导致对专家数据的过拟合。",
"excerpt_length": 65,
"extracted_length": 65,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2023/10/01/steerlm-attribute-conditioned-sft-as-an-user-steerable-alternative-to-rlhf.html",
"title": "SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF",
"raw_excerpt": "本文旨在解决现有大语言模型(LLM)对齐方法(特别是基于人类反馈的强化学习,RLHF)的局限性。",
"extracted_excerpt": "本文旨在解决现有大语言模型(LLM)对齐方法(特别是基于人类反馈的强化学习,RLHF)的局限性。",
"excerpt_length": 48,
"extracted_length": 48,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2025/05/01/dapo-an-open-source-llm-reinforcement-learning-system-at-scale.html",
"title": "DAPO: An Open-Source LLM Reinforcement Learning System at Scale",
"raw_excerpt": "论文的核心问题是大规模强化学习(RL)在大型语言模型(LLM)中的实际算法和关键技巧仍被隐藏,导致社区难以重现现有推理模型的训练结果,如OpenAI o1...",
"extracted_excerpt": "论文的核心问题是大规模强化学习(RL)在大型语言模型(LLM)中的实际算法和关键技巧仍被隐藏,导致社区难以重现现有推理模型的训练结果,如OpenAI o1...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2022/04/01/training-a-helpful-and-harmless-assistant-with-reinforcement-learning-from-human-feedback.html",
"title": "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback",
"raw_excerpt": "本文旨在通过偏好建模(Preference Modeling, PM)和基于人类反馈的强化学习(Reinforcement Learning from H...",
"extracted_excerpt": "本文旨在通过偏好建模(Preference Modeling, PM)和基于人类反馈的强化学习(Reinforcement Learning from H...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2013/12/01/playing-atari-with-deep-reinforcement-learning.html",
"title": "Playing Atari with Deep Reinforcement Learning",
"raw_excerpt": "本文提出首个能够直接从高维感官输入(如原始像素)中,通过强化学习成功学习控制策略的深度学习模型。",
"extracted_excerpt": "本文提出首个能够直接从高维感官输入(如原始像素)中,通过强化学习成功学习控制策略的深度学习模型。",
"excerpt_length": 48,
"extracted_length": 48,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2015/07/01/massively-parallel-methods-for-deep-reinforcement-learning.html",
"title": "Massively Parallel Methods for Deep Reinforcement Learning",
"raw_excerpt": "本文介绍了一种首次为深度强化学习设计的大规模分布式架构。由于先前最先进的深度强化学习算法,如深度Q网络(DQN),仅在单机架构上应用,导致训练时间过长(例...",
"extracted_excerpt": "本文介绍了一种首次为深度强化学习设计的大规模分布式架构。由于先前最先进的深度强化学习算法,如深度Q网络(DQN),仅在单机架构上应用,导致训练时间过长(例...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2025/04/01/exploring-data-scaling-trends-and-effects-in-reinforcement-learning-from-human-feedback.html",
"title": "Exploring Data Scaling Trends and Effects in Reinforcement Learning from Human Feedback",
"raw_excerpt": "本文探讨了人类反馈强化学习(RLHF)中数据缩放的趋势和影响,重点关注当前阻碍RLHF性能缩放的数据驱动瓶颈,特别是奖励黑客攻击和响应多样性下降的问题。研...",
"extracted_excerpt": "本文探讨了人类反馈强化学习(RLHF)中数据缩放的趋势和影响,重点关注当前阻碍RLHF性能缩放的数据驱动瓶颈,特别是奖励黑客攻击和响应多样性下降的问题。研...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2017/08/01/proximal-policy-optimization-algorithms.html",
"title": "Proximal Policy Optimization Algorithms",
"raw_excerpt": "核心问题:现有的主流强化学习方法存在各自的缺陷。深度Q学习(Deep Q-learning)在许多简单问题上会失败;“香草”策略梯度方法(vanilla ...",
"extracted_excerpt": "核心问题:现有的主流强化学习方法存在各自的缺陷。深度Q学习(Deep Q-learning)在许多简单问题上会失败;“香草”策略梯度方法(vanilla ...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2024/02/01/deepseekmath-pushing-the-limits-of-mathematical-reasoning-in-open-language-models.html",
"title": "DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models",
"raw_excerpt": "本文介绍了 DeepSeekMath,一个在数学推理方面取得显著性能的领域特定语言模型。其核心贡献可分为可扩展的数学预训练和对强化学习的探索与分析两个方面。",
"extracted_excerpt": "本文介绍了 DeepSeekMath,一个在数学推理方面取得显著性能的领域特定语言模型。其核心贡献可分为可扩展的数学预训练和对强化学习的探索与分析两个方面。",
"excerpt_length": 78,
"extracted_length": 78,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2020/09/01/learning-to-summarize-from-human-feedback.html",
"title": "Learning to summarize from human feedback",
"raw_excerpt": "本文的核心问题是,当前用于微调大型语言模型的监督学习目标(即最大化人类编写文本的对数概率)与我们真正关心的目标(即生成由人类判断的高质量输出)之间存在偏差...",
"extracted_excerpt": "本文的核心问题是,当前用于微调大型语言模型的监督学习目标(即最大化人类编写文本的对数概率)与我们真正关心的目标(即生成由人类判断的高质量输出)之间存在偏差...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2024/05/01/nemo-aligner-scalable-toolkit-for-efficient-model-alignment.html",
"title": "NeMo-Aligner: Scalable Toolkit for Efficient Model Alignment",
"raw_excerpt": "本文介绍了NeMo-Aligner,一个旨在高效地将大型语言模型(LLMs)与人类价值观和偏好对齐的工具包。对齐是使LLMs变得有用和安全的关键步骤,但为...",
"extracted_excerpt": "本文介绍了NeMo-Aligner,一个旨在高效地将大型语言模型(LLMs)与人类价值观和偏好对齐的工具包。对齐是使LLMs变得有用和安全的关键步骤,但为...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2025/05/01/prorl-prolonged-reinforcement-learning-expands-reasoning-boundaries-in-large-language-models.html",
"title": "ProRL: Prolonged Reinforcement Learning Expands Reasoning Boundaries in Large Language Models",
"raw_excerpt": "本文探讨了强化学习(RL)在提升大型语言模型(LLM)推理能力方面的核心问题:RL究竟是解锁了模型新的推理能力,还是仅仅优化了基础模型中已存在的解决方案的...",
"extracted_excerpt": "本文探讨了强化学习(RL)在提升大型语言模型(LLM)推理能力方面的核心问题:RL究竟是解锁了模型新的推理能力,还是仅仅优化了基础模型中已存在的解决方案的...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/rl/2025/07/01/group-sequence-policy-optimization.html",
"title": "Group Sequence Policy Optimization",
"raw_excerpt": "本文针对现有强化学习(RL)算法在训练大型语言模型时(特别是GRPO算法)存在的严重稳定性问题,提出了组序列策略优化(Group Sequence Pol...",
"extracted_excerpt": "本文针对现有强化学习(RL)算法在训练大型语言模型时(特别是GRPO算法)存在的严重稳定性问题,提出了组序列策略优化(Group Sequence Pol...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2025/03/01/samplemix-a-sample-wise-pre-training-data-mixing-strategey-by-coordinating-data-quality-and-diversity.html",
"title": "SampleMix: A Sample-wise Pre-training Data Mixing Strategey by Coordinating Data Quality and Diversity",
"raw_excerpt": "本文研究了大型语言模型(LLM)预训练数据的混合策略问题。现有的数据混合方法通常遵循一种“领域为单位”(domain-wise)的自顶向下方法,即先确定各...",
"extracted_excerpt": "本文研究了大型语言模型(LLM)预训练数据的混合策略问题。现有的数据混合方法通常遵循一种“领域为单位”(domain-wise)的自顶向下方法,即先确定各...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2021/04/01/roformer-enhanced-transformer-with-rotary-position-embedding.html",
"title": "ROFORMER: ENHANCED TRANSFORMER WITH ROTARY POSITION EMBEDDING",
"raw_excerpt": "本文研究了将位置信息集成到基于Transformer的语言模型中的多种方法,并提出了一种名为旋转位置嵌入(Rotary Position Embeddin...",
"extracted_excerpt": "本文研究了将位置信息集成到基于Transformer的语言模型中的多种方法,并提出了一种名为旋转位置嵌入(Rotary Position Embeddin...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2024/12/01/unveiling-the-secret-recipe-a-guide-for-supervised-fine-tuning-small-llms.html",
"title": "UNVEILING THE SECRET RECIPE: A GUIDE FOR SUPERVISED FINE-TUNING SMALL LLMS",
"raw_excerpt": "本文旨在弥合大型工业研究实验室与资源有限的个人开发者及小型组织之间在微调大型语言模型(LLM)方面的差距。研究的核心问题是:如何有效地在涵盖多样化知识和技...",
"extracted_excerpt": "本文旨在弥合大型工业研究实验室与资源有限的个人开发者及小型组织之间在微调大型语言模型(LLM)方面的差距。研究的核心问题是:如何有效地在涵盖多样化知识和技...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2022/03/01/tensor-programs-v-tuning-large-neural-networks-via-zero-shot-hyperparameter-transfer.html",
"title": "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot\n Hyperparameter Transfer",
"raw_excerpt": "核心问题:深度学习中的超参数(HP)调优是一个昂贵的过程,对于具有数十亿参数的神经网络(NNs)而言尤其如此,因为最先进的网络训练成本极高,导致调优变得不...",
"extracted_excerpt": "核心问题:深度学习中的超参数(HP)调优是一个昂贵的过程,对于具有数十亿参数的神经网络(NNs)而言尤其如此,因为最先进的网络训练成本极高,导致调优变得不...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2022/03/01/deepnet-scaling-transformers-to-1000-layers.html",
"title": "DeepNet: Scaling Transformers to 1,000 Layers",
"raw_excerpt": "核心问题:尽管Transformer模型参数量已达万亿级别,但其深度受限于训练不稳定性,通常不超过数百层。现有的稳定化方法,如Pre-LN,虽然提高了稳定...",
"extracted_excerpt": "核心问题:尽管Transformer模型参数量已达万亿级别,但其深度受限于训练不稳定性,通常不超过数百层。现有的稳定化方法,如Pre-LN,虽然提高了稳定...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2024/08/01/inference-scaling-laws-an-empirical-analysis-of-compute-optimal-inference-for-llm-problem-solving.html",
"title": "INFERENCE SCALING LAWS: AN EMPIRICAL ANALYSIS OF COMPUTE-OPTIMAL INFERENCE FOR LLM PROBLEM-SOLVING",
"raw_excerpt": "本文研究了大型语言模型(LLM)的推理缩放定律(或称测试时缩放定律)和计算最优推理,重点探讨了模型大小与不同推理策略下生成额外token之间的权衡。",
"extracted_excerpt": "本文研究了大型语言模型(LLM)的推理缩放定律(或称测试时缩放定律)和计算最优推理,重点探讨了模型大小与不同推理策略下生成额外token之间的权衡。",
"excerpt_length": 74,
"extracted_length": 74,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2013/12/01/what-makes-good-data-for-alignment-a-comprehensive-study-of-automatic-data-selection-in-instruction-tuning.html",
"title": "WHAT MAKES GOOD DATA FOR ALIGNMENT? A COMPREHENSIVE STUDY OF AUTOMATIC DATA SELECTION IN INSTRUCTION TUNING",
"raw_excerpt": "本文旨在系统性地探究指令微调中“好数据”的特征,并基于此提出一种自动、高效的数据选择方法,以提升大型语言模型(LLM)对齐的效率。",
"extracted_excerpt": "本文旨在系统性地探究指令微调中“好数据”的特征,并基于此提出一种自动、高效的数据选择方法,以提升大型语言模型(LLM)对齐的效率。",
"excerpt_length": 65,
"extracted_length": 65,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/pretrain_sft/2024/05/01/stacking-your-transformers-a-closer-look-at-model-growth-for-efficient-llm-pre-training.html",
"title": "Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training",
"raw_excerpt": "本文旨在解决大语言模型(LLM)预训练成本高昂的问题,通过一种名为“模型增长”的方法来加速训练过程。模型增长的核心思想是利用已训练好的小模型来初始化并加速...",
"extracted_excerpt": "本文旨在解决大语言模型(LLM)预训练成本高昂的问题,通过一种名为“模型增长”的方法来加速训练过程。模型增长的核心思想是利用已训练好的小模型来初始化并加速...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/01/01/kimi-k15-scaling-reinforcement-learning-with-llms.html",
"title": "KIMI K1.5: SCALING REINFORCEMENT LEARNING WITH LLMS",
"raw_excerpt": "本文介绍了Kimi k1.5的训练方法,这是一个利用强化学习(RL)进行训练的多模态大语言模型(LLM),旨在探索超越现有静态数据集限制的持续扩展新路径。...",
"extracted_excerpt": "本文介绍了Kimi k1.5的训练方法,这是一个利用强化学习(RL)进行训练的多模态大语言模型(LLM),旨在探索超越现有静态数据集限制的持续扩展新路径。...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/09/01/mimo-audio-audio-language-models-are-few-shot-learners.html",
"title": "MiMo-Audio: Audio Language Models are Few-Shot Learners",
"raw_excerpt": "本文的核心论点是,通过大规模的下一词元预测预训练,可以像文本领域的GPT-3一样,在音频领域实现任务的泛化能力。现有的音频语言模型通常依赖于针对特定音频任...",
"extracted_excerpt": "本文的核心论点是,通过大规模的下一词元预测预训练,可以像文本领域的GPT-3一样,在音频领域实现任务的泛化能力。现有的音频语言模型通常依赖于针对特定音频任...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/06/01/dotsllm1-technical-report.html",
"title": "dots.llm1 Technical Report",
"raw_excerpt": "本文介绍了dots.llm1,这是一款大规模、高性价比的专家混合(Mixture of Experts, MoE)模型。该模型总参数量为1420亿,但每个...",
"extracted_excerpt": "本文介绍了dots.llm1,这是一款大规模、高性价比的专家混合(Mixture of Experts, MoE)模型。该模型总参数量为1420亿,但每个...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/09/01/deepseek-v32-exp-boosting-long-context-efficiency-with-deepseek-sparse-attention.html",
"title": "DeepSeek-V3.2-Exp: Boosting Long-Context Efficiency with DeepSeek Sparse Attention",
"raw_excerpt": "本文介绍了 DeepSeek-V3.2-Exp,这是一个实验性的稀疏注意力模型。该模型通过在 DeepSeek-V3.1-Terminus 的基础上进行持...",
"extracted_excerpt": "本文介绍了 DeepSeek-V3.2-Exp,这是一个实验性的稀疏注意力模型。该模型通过在 DeepSeek-V3.1-Terminus 的基础上进行持...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/09/01/longcat-flash-thinking-technical-report.html",
"title": "LongCat-Flash-Thinking Technical Report",
"raw_excerpt": "本文介绍了 LongCat-Flash-Thinking,一个高效的、拥有 5600 亿参数的开源混合专家(MoE)推理模型。该模型的先进能力是通过一个精...",
"extracted_excerpt": "本文介绍了 LongCat-Flash-Thinking,一个高效的、拥有 5600 亿参数的开源混合专家(MoE)推理模型。该模型的先进能力是通过一个精...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2024/03/01/gemini-15-unlocking-multimodal-understanding-across-millions-of-tokens-of-context.html",
"title": "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context",
"raw_excerpt": "本文介绍了Gemini 1.5模型系列,这是一个新一代的高计算效率多模态模型家族。其核心研究目标是突破现有大语言模型(LLM)在上下文长度上的限制,实现对...",
"extracted_excerpt": "本文介绍了Gemini 1.5模型系列,这是一个新一代的高计算效率多模态模型家族。其核心研究目标是突破现有大语言模型(LLM)在上下文长度上的限制,实现对...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/05/01/mimo-unlocking-the-reasoning-potential-of-language-model-from-pretraining-to-posttraining.html",
"title": "MiMo: Unlocking the Reasoning Potential of Language Model – From Pretraining to Posttraining",
"raw_excerpt": "本文介绍了为推理任务而生的 7B 参数大规模语言模型 MiMo-7B,其在预训练和后训练阶段都进行了优化。研究的核心问题是如何通过预训练和后训练策略的协同...",
"extracted_excerpt": "本文介绍了为推理任务而生的 7B 参数大规模语言模型 MiMo-7B,其在预训练和后训练阶段都进行了优化。研究的核心问题是如何通过预训练和后训练策略的协同...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/05/01/qwen3-technical-report.html",
"title": "Qwen3 Technical Report",
"raw_excerpt": "本文介绍了Qwen模型家族的最新版本——Qwen3。Qwen3系列大型语言模型(LLMs)旨在提升性能、效率和多语言能力,其核心研究目标是通过一系列创新设...",
"extracted_excerpt": "本文介绍了Qwen模型家族的最新版本——Qwen3。Qwen3系列大型语言模型(LLMs)旨在提升性能、效率和多语言能力,其核心研究目标是通过一系列创新设...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2023/02/01/llama-open-and-efficient-foundation-language-models.html",
"title": "LLaMA: Open and Efficient Foundation Language Models",
"raw_excerpt": "本文介绍了一个名为LLaMA(Large Language Model Meta AI)的基础语言模型集合,其参数规模从70亿(7B)到650亿(65B)...",
"extracted_excerpt": "本文介绍了一个名为LLaMA(Large Language Model Meta AI)的基础语言模型集合,其参数规模从70亿(7B)到650亿(65B)...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/08/01/kimi-k2-open-agentic-intelligence.html",
"title": "KIMI K2: OPEN AGENTIC INTELLIGENCE",
"raw_excerpt": "本文介绍了Kimi K2,一个拥有1.04万亿总参数和320亿激活参数的混合专家(MoE)大型语言模型,其设计旨在应对智能体能力的核心挑战并推动其边界。研...",
"extracted_excerpt": "本文介绍了Kimi K2,一个拥有1.04万亿总参数和320亿激活参数的混合专家(MoE)大型语言模型,其设计旨在应对智能体能力的核心挑战并推动其边界。研...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/04/01/seed15-thinking-advancing-superb-reasoning-models-with-reinforcement-learning.html",
"title": "Seed1.5-Thinking: Advancing Superb Reasoning Models with Reinforcement Learning",
"raw_excerpt": "本文介绍了一款名为Seed1.5-Thinking的新型推理模型,该模型通过在响应前进行思考来提升在广泛基准测试中的性能。研究的核心目标是开发一个在推理和...",
"extracted_excerpt": "本文介绍了一款名为Seed1.5-Thinking的新型推理模型,该模型通过在响应前进行思考来提升在广泛基准测试中的性能。研究的核心目标是开发一个在推理和...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/08/01/longcat-flash-technical-report.html",
"title": "LongCat-Flash Technical Report",
"raw_excerpt": "LongCat-Flash 旨在沿两个协同方向推进语言模型的前沿:计算效率和智能体能力。本文的贡献涵盖了效率和智能体智能两个方面:",
"extracted_excerpt": "LongCat-Flash 旨在沿两个协同方向推进语言模型的前沿:计算效率和智能体能力。本文的贡献涵盖了效率和智能体智能两个方面:",
"excerpt_length": 65,
"extracted_length": 65,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2024/01/01/deepseek-coder-when-the-large-language-model-meets-programming-the-rise-of-code-intelligence.html",
"title": "DeepSeek-Coder: When the Large Language Model Meets Programming - The Rise of Code Intelligence",
"raw_excerpt": "本文旨在解决开源代码模型与闭源模型之间的性能差距。尽管大型语言模型(LLMs)极大地改变了软件开发中的代码智能,但强大的闭源模型由于其专有性,限制了广泛的...",
"extracted_excerpt": "本文旨在解决开源代码模型与闭源模型之间的性能差距。尽管大型语言模型(LLMs)极大地改变了软件开发中的代码智能,但强大的闭源模型由于其专有性,限制了广泛的...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2023/07/01/llama-2-open-foundation-and-fine-tuned-chat-models.html",
"title": "Llama 2: Open Foundation and Fine-Tuned Chat Models",
"raw_excerpt": "本文旨在解决现有开源大语言模型(LLMs)虽在预训练性能上可与闭源模型媲美,但因缺乏与人类偏好对齐的深度微调,而无法成为ChatGPT、BARD等成熟“产...",
"extracted_excerpt": "本文旨在解决现有开源大语言模型(LLMs)虽在预训练性能上可与闭源模型媲美,但因缺乏与人类偏好对齐的深度微调,而无法成为ChatGPT、BARD等成熟“产...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2024/12/01/deepseek-v3-technical-report.html",
"title": "DeepSeek-V3 Technical Report",
"raw_excerpt": "DeepSeek-V3是一个拥有6710亿总参数的强混合专家(MoE)语言模型,每个token激活370亿参数。为了实现经济高效的训练和推理,该模型继承并...",
"extracted_excerpt": "DeepSeek-V3是一个拥有6710亿总参数的强混合专家(MoE)语言模型,每个token激活370亿参数。为了实现经济高效的训练和推理,该模型继承并...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2025/08/01/glm-45-agentic-reasoning-and-coding-arc-foundation-models.html",
"title": "GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models",
"raw_excerpt": "本文介绍了两个新的大型语言模型:GLM-4.5 和 GLM-4.5-Air,旨在统一代理(Agentic)、推理(Reasoning)和编码(Coding...",
"extracted_excerpt": "本文介绍了两个新的大型语言模型:GLM-4.5 和 GLM-4.5-Air,旨在统一代理(Agentic)、推理(Reasoning)和编码(Coding...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/algorithm/models/2024/07/01/the-llama-3-herd-of-models.html",
"title": "The Llama 3 Herd of Models",
"raw_excerpt": "本文介绍了一套名为 Llama 3 的新型基础模型。Llama 3 模型家族原生支持多语言、编码、推理和工具使用。其中最大的模型是一个拥有4050亿(40...",
"extracted_excerpt": "本文介绍了一套名为 Llama 3 的新型基础模型。Llama 3 模型家族原生支持多语言、编码、推理和工具使用。其中最大的模型是一个拥有4050亿(40...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2025/05/01/sageattention3-microscaling-fp4-attention-for-inference-and-an-exploration-of-8-bit-training.html",
"title": "SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-bit Training",
"raw_excerpt": "研究动机: Attention机制的效率对于生成模型至关重要,特别是其二次时间复杂度在处理长序列时成为瓶颈。量化是利用GPU中低比特张量核心(Tensor...",
"extracted_excerpt": "研究动机: Attention机制的效率对于生成模型至关重要,特别是其二次时间复杂度在处理长序列时成为瓶颈。量化是利用GPU中低比特张量核心(Tensor...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2023/09/01/deepspeed-ulysses-system-optimizations-for-enabling-training-of-extreme-long-sequence-transformer-models.html",
"title": "DEEPSPEED ULYSSES: SYSTEM OPTIMIZATIONS FOR ENABLING TRAINING OF EXTREME LONG SEQUENCE TRANSFORMER MODELS",
"raw_excerpt": "本文针对大规模语言模型(LLM)在长序列训练中面临的系统挑战,介绍了一种名为 DeepSpeed-Ulysses 的新方法。",
"extracted_excerpt": "本文针对大规模语言模型(LLM)在长序列训练中面临的系统挑战,介绍了一种名为 DeepSpeed-Ulysses 的新方法。",
"excerpt_length": 62,
"extracted_length": 62,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2019/11/01/fast-transformer-decoding-one-write-head-is-all-you-need.html",
"title": "Fast Transformer Decoding: One Write-Head is All You Need",
"raw_excerpt": "核心问题与研究目标:Transformer 神经网络序列模型在增量推理(incremental inference)时的速度是一个主要挑战。在现代计算硬件...",
"extracted_excerpt": "核心问题与研究目标:Transformer 神经网络序列模型在增量推理(incremental inference)时的速度是一个主要挑战。在现代计算硬件...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2024/04/01/leave-no-context-behind-efficient-infinite-context-transformers-with-infini-attention.html",
"title": "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention",
"raw_excerpt": "本文旨在解决 Transformer 及基于 Transformer 的大型语言模型(LLMs)因注意力机制的二次复杂度而在处理长序列时面临的内存和计算瓶...",
"extracted_excerpt": "本文旨在解决 Transformer 及基于 Transformer 的大型语言模型(LLMs)因注意力机制的二次复杂度而在处理长序列时面临的内存和计算瓶...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2020/02/01/low-rank-bottleneck-in-multi-head-attention-models.html",
"title": "Low-Rank Bottleneck in Multi-head Attention Models",
"raw_excerpt": "本文旨在识别并解决当前多头注意力模型中一个关键的性能瓶颈。",
"extracted_excerpt": "本文旨在识别并解决当前多头注意力模型中一个关键的性能瓶颈。",
"excerpt_length": 29,
"extracted_length": 29,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2023/10/01/ring-attention-with-blockwise-transformers-for-near-infinite-context.html",
"title": "Ring Attention with Blockwise Transformers for Near-Infinite Context",
"raw_excerpt": "核心问题: Transformer架构在处理长序列时面临严峻的内存挑战。其自注意力机制的内存成本与序列长度成二次方关系,导致难以扩展到长序列输入。此外,即...",
"extracted_excerpt": "核心问题: Transformer架构在处理长序列时面临严峻的内存挑战。其自注意力机制的内存成本与序列长度成二次方关系,导致难以扩展到长序列输入。此外,即...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2025/09/01/sla-beyond-sparsity-in-diffusion-transformers-via-fine-tunable-sparselinear-attention.html",
"title": "SLA: BEYOND SPARSITY IN DIFFUSION TRANSFORMERS VIA FINE-TUNABLE SPARSE–LINEAR ATTENTION",
"raw_excerpt": "核心问题: 在扩散 Transformer(DiT)模型中,尤其是在视频生成领域,由于序列长度很长,注意力机制的二次方复杂度成为主要的延迟瓶颈。",
"extracted_excerpt": "核心问题: 在扩散 Transformer(DiT)模型中,尤其是在视频生成领域,由于序列长度很长,注意力机制的二次方复杂度成为主要的延迟瓶颈。",
"excerpt_length": 72,
"extracted_length": 72,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2023/07/01/flashattention-2-faster-attention-with-better-parallelism-and-work-partitioning.html",
"title": "FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning",
"raw_excerpt": "本文的核心问题是解决 Transformer 模型在处理长序列时的性能瓶颈。注意力层的运行时间和内存占用随序列长度呈二次方增长,这限制了模型处理长文档、高...",
"extracted_excerpt": "本文的核心问题是解决 Transformer 模型在处理长序列时的性能瓶颈。注意力层的运行时间和内存占用随序列长度呈二次方增长,这限制了模型处理长文档、高...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2025/02/01/tree-attention-topology-aware-decoding-for-long-context-attention-on-gpu-clusters.html",
"title": "TREE ATTENTION: TOPOLOGY-AWARE DECODING FOR LONG-CONTEXT ATTENTION ON GPU CLUSTERS",
"raw_excerpt": "本文旨在解决现代 Transformer 架构核心操作——自注意力机制的计算瓶颈问题。自注意力机制的计算复杂度随序列长度呈二次方增长,这使得处理长上下文的...",
"extracted_excerpt": "本文旨在解决现代 Transformer 架构核心操作——自注意力机制的计算瓶颈问题。自注意力机制的计算复杂度随序列长度呈二次方增长,这使得处理长上下文的...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2025/05/01/flashmla-etap-efficient-transpose-attention-pipeline-for-accelerating-mla-inference-on-nvidia-h20-gpus.html",
"title": "FlashMLA-ETAP: Efficient Transpose Attention Pipeline for Accelerating MLA Inference on NVIDIA H20 GPUs",
"raw_excerpt": "核心问题: Transformer架构中的注意力机制(如多头注意力MHA和多头潜在注意力MLA)具有与序列长度平方成正比的计算复杂度,这在处理长上下文任务...",
"extracted_excerpt": "核心问题: Transformer架构中的注意力机制(如多头注意力MHA和多头潜在注意力MLA)具有与序列长度平方成正比的计算复杂度,这在处理长上下文任务...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2024/07/01/flashattention-3-fast-and-accurate-attention-with-asynchrony-and-low-precision.html",
"title": "FlashAttention-3: Fast and Accurate Attention with Asynchrony and Low-precision",
"raw_excerpt": "本文的核心问题是,尽管FlashAttention-2通过最小化内存读写加速了GPU上的注意力计算,但在最新的H100 GPU上其利用率仅达到35%,未能...",
"extracted_excerpt": "本文的核心问题是,尽管FlashAttention-2通过最小化内存读写加速了GPU上的注意力计算,但在最新的H100 GPU上其利用率仅达到35%,未能...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2024/11/01/sageattention2-efficient-attention-with-thorough-outlier-smoothing-and-per-thread-int4-quantization.html",
"title": "SageAttention2: Efficient Attention with Thorough Outlier Smoothing and Per-thread INT4 Quantization",
"raw_excerpt": "本文针对现有注意力计算加速方法的不足,提出了 SageAttention2,旨在通过使用速度更快的4位矩阵乘法(INT4 Matmul)并结合多种精度增强...",
"extracted_excerpt": "本文针对现有注意力计算加速方法的不足,提出了 SageAttention2,旨在通过使用速度更快的4位矩阵乘法(INT4 Matmul)并结合多种精度增强...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2024/07/01/usp-a-unified-sequence-parallelism-approach-for-long-context-generative-ai.html",
"title": "USP: A Unified Sequence Parallelism Approach for Long Context Generative AI",
"raw_excerpt": "本文旨在解决生成式AI模型中日益增长的长上下文处理需求所带来的挑战。随着Claude、GPT-4、Gemini 1.5 Pro和Sora等模型将上下文长度...",
"extracted_excerpt": "本文旨在解决生成式AI模型中日益增长的长上下文处理需求所带来的挑战。随着Claude、GPT-4、Gemini 1.5 Pro和Sora等模型将上下文长度...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2023/12/01/gqa-training-generalized-multi-query-transformer-models-from-multi-head-checkpoints.html",
"title": "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints",
"raw_excerpt": "自回归解码器的推理过程是Transformer模型的一个严重瓶颈,主要因为在每个解码步骤都需要加载解码器权重以及所有的注意力键(keys)和值(value...",
"extracted_excerpt": "自回归解码器的推理过程是Transformer模型的一个严重瓶颈,主要因为在每个解码步骤都需要加载解码器权重以及所有的注意力键(keys)和值(value...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2024/10/01/sageattention-accurate-8-bit-attention-for-plug-and-play-inference-acceleration.html",
"title": "SAGEATTENTION: ACCURATE 8-BIT ATTENTION FOR PLUG-AND-PLAY INFERENCE ACCELERATION",
"raw_excerpt": "本文旨在解决Transformer模型中注意力(Attention)机制的计算效率问题。随着序列长度的增加,具有 $O(N^2)$ 计算复杂度的注意力机制...",
"extracted_excerpt": "本文旨在解决Transformer模型中注意力(Attention)机制的计算效率问题。随着序列长度的增加,具有 $O(N^2)$ 计算复杂度的注意力机制...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2025/08/01/mixture-of-contexts-for-long-video-generation.html",
"title": "Mixture of Contexts for Long Video Generation",
"raw_excerpt": "核心问题:长视频生成本质上是一个长上下文记忆问题。模型必须在长时间范围内保留和检索显著事件,而不会出现内容崩塌或漂移。然而,将扩散变换器(Diffusio...",
"extracted_excerpt": "核心问题:长视频生成本质上是一个长上下文记忆问题。模型必须在长时间范围内保留和检索显著事件,而不会出现内容崩塌或漂移。然而,将扩散变换器(Diffusio...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/attention/2022/07/01/flashattention-fast-and-memory-efficient-exact-attention-with-io-awareness.html",
"title": "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness",
"raw_excerpt": "本文旨在解决Transformer模型在处理长序列时速度慢、内存消耗大的核心问题,该问题源于自注意力机制的时间和内存复杂度与序列长度成二次方关系。许多现有...",
"extracted_excerpt": "本文旨在解决Transformer模型在处理长序列时速度慢、内存消耗大的核心问题,该问题源于自注意力机制的时间和内存复杂度与序列长度成二次方关系。许多现有...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/train/2023/04/01/pytorch-fsdp-experiences-on-scaling-fully-sharded-data-parallel.html",
"title": "PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel",
"raw_excerpt": "本文介绍了PyTorch的完全分片数据并行(Fully Sharded Data Parallel, FSDP),这是一个用于大规模模型训练的工业级解决方...",
"extracted_excerpt": "本文介绍了PyTorch的完全分片数据并行(Fully Sharded Data Parallel, FSDP),这是一个用于大规模模型训练的工业级解决方...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/train/2025/08/01/optimus-accelerating-large-scale-multi-modal-llm-training-by-bubble-exploitation.html",
"title": "Optimus: Accelerating Large-Scale Multi-Modal LLM Training by Bubble Exploitation",
"raw_excerpt": "本文旨在解决训练大规模多模态语言模型(MLLM)时的效率低下问题。现有系统在训练MLLM时,由于异构的模态模型和3D并行中复杂的数据依赖性,会产生大量的G...",
"extracted_excerpt": "本文旨在解决训练大规模多模态语言模型(MLLM)时的效率低下问题。现有系统在训练MLLM时,由于异构的模态模型和3D并行中复杂的数据依赖性,会产生大量的G...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/train/2024/09/01/domino-eliminating-communication-in-llm-training-via-generic-tensor-slicing-and-overlapping.html",
"title": "Domino: Eliminating Communication in LLM Training via Generic Tensor Slicing and Overlapping",
"raw_excerpt": "生成式AI的最新进展在各种领域启用了新的应用场景,例如聊天机器人【[53], OpenAI. ChatGPT. https://chatgpt.com/,...",
"extracted_excerpt": "生成式AI的最新进展在各种领域启用了新的应用场景,例如聊天机器人【[53], OpenAI. ChatGPT. https://chatgpt.com/,...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/train/2021/07/01/chimera-efficiently-training-large-scale-neural-networks-with-bidirectional-pipelines.html",
"title": "Chimera: Efficiently Training Large-Scale Neural Networks with Bidirectional Pipelines",
"raw_excerpt": "核心问题:训练大规模深度学习模型极具挑战性。随着模型规模的增长(例如,GPT-3拥有1750亿参数),简单的并行化方案(如数据并行)已不再适用,因为模型无...",
"extracted_excerpt": "核心问题:训练大规模深度学习模型极具挑战性。随着模型规模的增长(例如,GPT-3拥有1750亿参数),简单的并行化方案(如数据并行)已不再适用,因为模型无...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/train/2024/06/01/universal-checkpointing-a-flexible-and-efficient-distributed-checkpointing-system-for-large-scale-dnn-training-with-reconfigurable-parallelism.html",
"title": "Universal Checkpointing: A Flexible and Efficient Distributed Checkpointing System for Large-Scale DNN Training with Reconfigurable Parallelism",
"raw_excerpt": "本文旨在解决大规模深度神经网络(DNN)训练中的一个核心问题:现有训练系统在通过检查点技术重新配置并行策略方面的支持非常有限。随着模型规模、数据量和序列长...",
"extracted_excerpt": "本文旨在解决大规模深度神经网络(DNN)训练中的一个核心问题:现有训练系统在通过检查点技术重新配置并行策略方面的支持非常有限。随着模型规模、数据量和序列长...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},
{
"url": "/papers/llm/engineering/train/2023/10/01/fault-tolerant-hybrid-parallel-training-at-scale-with-reliable-and-efficient-in-memory-checkpointing.html",
"title": "Fault-Tolerant Hybrid-Parallel Training at Scale with Reliable and Efficient In-memory Checkpointing",
"raw_excerpt": "核心问题:为了高效扩展大型模型(LM)的训练,研究人员从数据并行(DP)转向GPU集群上的混合并行(HP)。然而,这些集群频繁遭遇硬件和软件故障。现有的内...",
"extracted_excerpt": "核心问题:为了高效扩展大型模型(LM)的训练,研究人员从数据并行(DP)转向GPU集群上的混合并行(HP)。然而,这些集群频繁遭遇硬件和软件故障。现有的内...",
"excerpt_length": 80,
"extracted_length": 80,
"is_extracted": false
},