Questioning Go's range-over-func Proposal
2024-02-08
With the release of Go 1.22, the "range-over-func" experiment has shipped. It can be activated by setting the environment variable GOEXPERIMENT=rangefunc and is described in the "Rangefunc Experiment" wiki article.
As I am curious about the development of Go, I read the wiki article. I'm not really sold on what was being pitched to me there. Except for one completely synthetic example, there was no demonstration of the new concept.
To make the proposal more tangible, let's come up with some real-world code using the proposed feature and evaluate it.
The "Backwards" Example
First, let me briefly show you the example I called "synthetic". This code is shown in the wiki article. It prints the elements of a slice in reverse order:
func main() { s := []string{"hello", "world"} for i, x := range Backward(s) { fmt.Println(i, x) } } func Backward[E any](s []E) func(func(int, E) bool) { return func(yield func(int, E) bool) { for i := len(s)-1; i >= 0; i-- { if !yield(i, s[i]) { return } } } }
Without range-over-func the same thing could look like this:
func main() { s := []string{"hello", "world"} for i := len(s)-1; i >= 0; i-- { fmt.Println(i, s[i]) } }
A Real-World Example
Of course, the code from the wiki article is only supposed to show what the interface looks like, and it's not fair to judge the whole proposal by this less-than-useful example.
Let's try to construct a more realistic use case. Towards the end, the wiki article mentions some functions from the standard library that could be replaced with better versions if the range-over-func proposal were accepted. The first function mentioned is strings.Split, so let's take a closer look at it. Here is an example of how this function can be used:
func main() { animals := "dog fish cat" for i, animal := range strings.Split(animals, " ") { fmt.Println(i, animal) } }
According to the wiki article, this doesn't scale well, because strings.Split returns a slice of strings. I assume that the problem is the additional memory that is used for this slice. So let's see how range-over-func could improve the memory usage here:
func main() { animals := "dog fish cat" for i, animal := range Split(animals, " ") { fmt.Println(i, animal) } } func Split(s, sep string) func(func(int, string) bool) { return func(yield func(int, string) bool) { for i := 0; len(s) > 0; i++ { j := strings.Index(s, sep) if j == -1 { yield(i, s) return } if !yield(i, s[:j]) { return } s = s[j+len(sep):] } } }
Here, only one segment of the split is read and printed at a time. The same can be achieved without range-over-func and with less code:
func main() { animals := "dog fish cat" printSplit(animals, " ") } func printSplit(s, sep string) { for i := 0; len(s) > 0; i++ { j := strings.Index(s, sep) if j == -1 { fmt.Println(i, s) break } fmt.Println(i, s[:j]) s = s[j+len(sep):] } }
What Do We Gain?
So how is range-over-func an improvement here? I assume the answer is: the range-over-func Split function can be put into a library, and its complexity can be hidden that way. This is not possible with printSplit, because it has the print statements hardwired inside.
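To make the reuse argument concrete, here is a sketch of how such a library Split could be driven by a different loop body, collecting the segments instead of printing them. The collectFields helper is my own hypothetical example, not something from the proposal:

// collectFields is a hypothetical helper that reuses the library Split
// with a different loop body: it gathers the segments into a slice
// instead of printing them.
func collectFields(s string) []string {
    var fields []string
    for _, field := range Split(s, " ") {
        fields = append(fields, field)
    }
    return fields
}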
But at What Cost?
The price we pay for this easy-to-use yet scalable solution is the introduction of an interface that I find extremely hard to understand. To implement a function that can be used in a range loop, you must write an (ideally generic) function that returns a function which accepts a function as a parameter. This makes my head spin. I don't really want to imagine debugging range-over-func code.
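To illustrate the nesting, this is roughly what using Split looks like once the range syntax is stripped away: you call the function that Split returns and hand it the loop body as a callback. This is a hand-written sketch of the shape, not the exact code the compiler generates:

func main() {
    // Split returns a function; that function takes the "loop body"
    // as a callback and calls it once per segment.
    Split("dog fish cat", " ")(func(i int, animal string) bool {
        fmt.Println(i, animal)
        return true // returning false here is the equivalent of break
    })
}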
Is It Worth It?
I see range-over-func as a trade-off: on one side, easy-to-use, scalable library functions that are so complicated inside that many users won't be able to distinguish them from magic; on the other, slightly more verbose and repetitive, but simple code.
In the past, Go always seemed to lean towards simplicity, even if it meant that users had to write a little bit of boilerplate here and there. To me, this was the fulfillment of Go's promise to stay simple, keep the language's features orthogonal, and resist the feature creep that plagues so many other languages. It is what made me fall in love with Go.
Should we sacrifice simplicity for easier scalability? I've come to greatly appreciate Rob Pike's 5 Rules of Programming. Rule #2 states: "Measure. Don't tune for speed until you've measured, and even then don't unless one part of the code overwhelms the rest." I'd like to apply this rule here: the range-over-func library functions would bring us potential performance gains or reduced memory use, but I think we won't really benefit from them 99% of the time. And in the 1% that are actual, proven bottlenecks, we can still optimize by hand and don't have to reach for a construct as complicated as range-over-func.
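If we take rule #2 seriously, a measurement could look roughly like the benchmark sketch below. It assumes the Split iterator from above, lives in a _test.go file with "strings" and "testing" imported, and uses an arbitrary made-up input; whether the allocations saved by the iterator version matter would depend on the actual workload:

// BenchmarkStringsSplit measures the slice-returning standard library function.
func BenchmarkStringsSplit(b *testing.B) {
    s := strings.Repeat("word ", 1000)
    for n := 0; n < b.N; n++ {
        for _, w := range strings.Split(s, " ") {
            _ = w
        }
    }
}

// BenchmarkRangeOverFuncSplit measures the iterator-style Split from above,
// called directly so it works even without the range syntax.
func BenchmarkRangeOverFuncSplit(b *testing.B) {
    s := strings.Repeat("word ", 1000)
    for n := 0; n < b.N; n++ {
        Split(s, " ")(func(i int, w string) bool {
            _ = w
            return true
        })
    }
}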
Please, my dear Go authors, don't take this step hastily. We could never go back. At least find, evaluate, and show us some real-world examples where range-over-func is actually compelling before complicating the language forever.