Go Team Cuts Heap Allocations Dramatically with New Stack Allocation Optimizations

<p><strong>February 27, 2026</strong> &mdash; The Go programming language team today announced a major performance improvement targeting one of the most persistent sources of slowdown in Go programs: heap allocations. By shifting more memory allocation to the stack, the new optimizations promise faster execution and reduced garbage collection overhead, especially for common patterns such as dynamic slice growth.</p>

<p>&ldquo;We&rsquo;re always looking for ways to make Go programs faster,&rdquo; said Keith Randall, a core Go team member. &ldquo;In the last two releases, we have concentrated on mitigating heap allocations because each heap allocation requires a fairly large chunk of code to satisfy it and adds load on the garbage collector. Even with recent enhancements like Green Tea, garbage collection still incurs substantial overhead.&rdquo;</p>

<p>Heap allocations are expensive compared to stack allocations, which are nearly free and are cleaned up automatically when the function returns. The new work focuses on enabling more allocations on the stack, particularly for constant-sized slices and other structures that previously forced heap usage.</p>

<h2><a id="the-problem"></a>The Problem: Appending to Slices</h2>

<p>A common Go pattern is building a slice by appending elements received from a channel. As the slice grows, appends that exceed the current capacity trigger a series of heap allocations, especially in the early iterations when the backing store is small.</p>

<p>&ldquo;On the first loop iteration, there is no backing store for the slice, so append allocates one of size 1.
On the second iteration, it&rsquo;s full, so another allocation of size 2 follows, and the old store becomes garbage,&rdquo; Randall explained. The pattern continues with allocations of sizes 4, 8, and so on: a startup phase that produces many small allocations and lots of garbage before the slice reaches a stable size.</p>

<p>While the slice-doubling algorithm eventually reduces the number of allocations per append, the early overhead can be significant for code that never builds large slices. &ldquo;This startup phase may be all you ever encounter if your slice doesn&rsquo;t grow large,&rdquo; Randall noted. &ldquo;We&rsquo;ve been working on ways to allocate more on the stack instead of the heap to eliminate that waste.&rdquo;</p>

<h2><a id="the-solution"></a>The Solution: Stack Allocation for Constant-Sized Slices</h2>

<p>The Go team&rsquo;s new optimizations allow the compiler to allocate a slice&rsquo;s backing store on the stack when its maximum size can be determined at compile time, or when the slice is used in a context where stack allocation is safe. This eliminates repeated heap allocations and reduces pressure on the garbage collector.</p>

<p>&ldquo;Stack allocations are considerably cheaper to perform, sometimes completely free,&rdquo; Randall said. &ldquo;Moreover, they present no load to the garbage collector, as stack allocations can be collected automatically together with the stack frame itself. Stack allocations also enable prompt reuse, which is very cache friendly.&rdquo;</p>

<p>The result is that loops that append to slices can now run faster, with fewer pauses for memory management and better CPU cache behavior.</p>

<h2><a id="background"></a>Background</h2>

<p>Go&rsquo;s runtime has long used a garbage collector (GC) to manage heap memory. The Green Tea GC, introduced in recent releases, improved concurrent marking and reduced latency, but heap allocation itself still carries a runtime cost.
Stack allocation, in contrast, involves little more than advancing the stack pointer and is essentially free. Before these optimizations, the Go compiler could stack-allocate only objects of known, fixed size (such as fixed-size arrays or structs); dynamically sized slices almost always ended up on the heap.</p>

<p>The new work builds on earlier escape analysis improvements to detect more cases where heap allocation can be avoided. By recognizing that certain slice operations never escape the current function, the compiler can place their backing stores on the stack.</p>

<h2><a id="what-this-means"></a>What This Means for Developers</h2>

<p>For Go developers, this optimization translates directly into faster code and less GC overhead in hot loops. Programs that frequently allocate small slices (for example, in middleware, parsers, or streaming applications) should see noticeable performance improvements.</p>

<p>The changes are backward-compatible: existing code requires no modification to benefit. Developers can expect better throughput and lower memory usage in many common scenarios. The Go team recommends testing allocation-heavy workloads against the latest release to measure the impact.</p>

<p>&ldquo;We believe these stack allocation improvements, combined with the Green Tea garbage collector, make Go an even more compelling choice for high-performance systems,&rdquo; Randall concluded.</p>

<p>The optimizations are included in the Go 1.24 release and later versions. More details are available in the official <a href="https://go.dev/doc/go1.24">Go 1.24 release notes</a>.</p>
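<p>The startup growth pattern Randall describes is easy to observe. The following minimal sketch (illustrative only, not code from the release) prints the slice&rsquo;s length and capacity each time <code>append</code> replaces the backing store; each capacity change marks a fresh allocation that, before these optimizations, always landed on the heap.</p>

```go
package main

import "fmt"

func main() {
	var s []int
	prevCap := -1
	for i := 0; i < 10; i++ {
		s = append(s, i)
		// A change in capacity means append allocated a new
		// backing store and copied the old elements into it.
		if cap(s) != prevCap {
			fmt.Printf("len=%d cap=%d (new backing store)\n", len(s), cap(s))
			prevCap = cap(s)
		}
	}
}
```

<p>Appending ten integers allocates backing stores of capacity 1, 2, 4, 8, and 16: the doubling sequence described in the article. When the compiler can prove such a slice never escapes its function, the new optimizations allow those stores to live on the stack instead.</p>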