This is about an explicit argument of type "Context". I'm not a Go user, and at first I thought it was about something else: an implicit context variable that allows you to pass stuff deep down the call stack, without intermediate functions knowing about it.
React has "Context", SwiftUI has "@Environment", Emacs LISP has dynamic scope (so I heard). C# has AsyncLocal, Node.JS AsyncLocalStorage.
This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?) but is actually very useful and can result in cleaner code with fewer globals or fewer superfluous function arguments. Imagine passing a logger like this, or feature flags. Or imagine setting "debug = True" before a function, and having it apply to everything down the call stack (but not in other threads/async contexts).
Implicit context (properly integrated into the type system) is something I would consider in any new language. And it might also be a solution here (although I would say such a "clever" and unusual feature would be against the goals of Go).
Passing the current user ID/tenant ID inside ctx has been super useful for us. We’re already using contexts for cancellation and graceful termination, so our application-layer functions already have them. Makes sense to just reuse them to store user and tenant IDs too (which we pull from access tokens in the transport layer).
We have DB sharding, so the DB layer needs to figure out which shard to choose. It does that by grabbing the user/tenant ID from the context and picking the right shard. Without contexts, this would be way harder—unless we wanted to break architecture rules, like exposing domain logic to DB details, and it would generally just clutter the code (passing tenant ID and shard IDs everywhere). Instead, we just use the "current request context" from the standard lib that can be passed around freely between modules, with various bits extracted from it as needed.
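To illustrate the shape of it (a minimal sketch, not our actual code; TenantFrom and the shard math are hypothetical):

    package db

    import (
        "context"
        "database/sql"
        "errors"
        "hash/fnv"
    )

    type tenantKey struct{}

    // TenantFrom is a typed accessor over ctx.Value; the transport layer
    // would store the tenant ID under the unexported key when decoding
    // the access token.
    func TenantFrom(ctx context.Context) (string, bool) {
        id, ok := ctx.Value(tenantKey{}).(string)
        return id, ok
    }

    type Store struct{ shards []*sql.DB }

    // shardFor picks the shard for the current request's tenant. Domain
    // code above this layer never sees tenant or shard IDs.
    func (s *Store) shardFor(ctx context.Context) (*sql.DB, error) {
        id, ok := TenantFrom(ctx)
        if !ok {
            return nil, errors.New("no tenant ID in request context")
        }
        h := fnv.New32a()
        h.Write([]byte(id))
        return s.shards[h.Sum32()%uint32(len(s.shards))], nil
    }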
What are the alternatives, though? Syntax sugar for retrieving variables from some sort of goroutine-local storage? Not good; we want things to be explicit. Force everyone to roll their own context-like interfaces, since a standard lib's implementation can't generalize well for all situations? That's exactly why contexts were introduced—because nobody wanted to deal with mismatched custom implementations from different libs. Split it into separate "data context" and "cancellation context"? Okay, now we're passing around two variables instead of one in every function call. DI to the rescue? You can hide userID/tenantID with clever dependency injection, and that's what we did before we introduced contexts to our codebase (we embedded userID/tenantID and other request details inside request-specific service instances, to hide them from the domain layer and simplify domain logic), but that resulted in allocating individual dependency trees for each request, and it stressed the GC.
An alternative is to add all dependencies explicitly to function argument lists or object fields, instead of using them implicitly from the context without documentation or static typing. Including the logger.
Main problems with passing dependencies in function argument lists:
1) it pollutes the code and makes refactoring harder (a small change in one place must be propagated to all call sites in the dependency tree which recursively accept user ID/tenant ID and similar info)
2) it violates various architectural principles: for example, from the point of view of our business logic, there's no such thing as a "tenant ID"—it's an implementation detail used to store data more efficiently. If we just relied on function argument lists, we'd have to litter actual business logic with infrastructure-specific references to tenant IDs and the like so that the underlying DB layer could figure out what to do.
Sure, it can be solved with constructor-based dependency injection (i.e. request-specific service instances are generated for each request, and we store user ID/tenant ID & friends as object fields of such request-scoped instances), and that's what we had before switching to contexts, but it resulted in excessive allocations and unnecessary memory pressure for our high-load services. In complex enterprise code, those dependency trees can be quite large -- and we ended up allocating huge dependency trees for each request. With contexts, we now have a single application-scoped service dependency tree, and request-specific stuff just comes inside contexts.
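A compressed sketch of the before/after (all names hypothetical):

    package orders

    import "context"

    type OrderRepo struct{}

    func (r *OrderRepo) Save(ctx context.Context, orderID string) error { return nil }

    // Before: request-scoped service, so a fresh dependency tree is
    // allocated per request just to carry request data.
    type RequestScopedOrderService struct {
        repo             *OrderRepo
        userID, tenantID string
    }

    // After: one application-scoped instance; user/tenant IDs ride in
    // ctx and are extracted only where needed (e.g. the DB layer).
    type OrderService struct {
        repo *OrderRepo
    }

    func (s *OrderService) Place(ctx context.Context, orderID string) error {
        return s.repo.Save(ctx, orderID)
    }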
Both problems can be solved by trying to group and reuse data cleverly, but eventually you'll get back to square one with an implementation which looks similar to context.Context but which is not reusable/composable.
>Including logger.
We don't store loggers in ctx, they aren't request-specific, so we just use constructor-based DI.
I believe this problem isn't solvable under our current paradigm of programming, which I call "working directly on plaintext, single-source-of-truth codebase".
Tenant ID, cancellations, loggers, error handling are all examples of cross-cutting concerns. Depending on what any given function does, and what you (the programmer) are interested in at a given moment, any of them could be critical information or pure noise. Ideally, you should not be seeing the things you don't care about, but our current paradigm forces us to spell out all of them, at all times, hurting readability and increasing complexity.
On the readability/"clean code", our most advanced languages are operating on a Pareto frontier. We have whole math fields being employed in service of packaging up common cross-cutting concerns, as to minimize the noise they generate. This is where all the magic monads come from, this is why you have to pay attention to infectious colors of your functions, etc. Different languages make slightly different trade-offs here, to make some concerns more readable, but since it's a Pareto frontier, it always makes some other aspects of code less comprehensible.
In my not so humble opinion, we won't progress beyond this point until we give up on the paradigm itself. We need to accept that, at any given moment, a programmer may need a different perspective on the code, and we need to build tools to allow writing code from those perspectives. What we now call source code should be relegated to the role of intermediary/object code - a single source of truth for the bowels of the compiler, but otherwise something we never touch directly.
Ultimately, the problem of "context" is a problem of perspective, and should be solved by tooling. That is, when reading or modifying code, I should be able to ignore any and all context I don't care about. One moment, I might care about the happy path, so I should be able to view and edit code with all error propagation removed; at another moment, I might care about how all the data travels through the module, in which case I want to see the same code with every single goddamn thing spelled out explicitly, in the fashion GP is arguing to be the default. Etc.
Plaintext is fine. Single source of truth is fine. A single all-encompassing view of everything in a source file is fine. But they're not fine all together, all the time.
Monads, but more importantly monad transformers, so you can program in a legible fashion.
However, there's a lot of manual labour to stuff everything into a monad, and then extract it and pattern match when your libraries don't match your choice of control flow monad(s)!
This is where I'd prefer if compilers could come in.
Imagine being in the bowels of a DB lib, and realising that the function you just wrote might be well positioned to terminate the TCP connection that it's using to talk to the database. Oh no: now you have to update the signature and every single call-site for its parent, and its parent, and...
Instead, it would be neat if the compiler could treat things you deem cross-cutting as a graph traversal problem instead; call a cancelable method and all callers are automatically cancelable. Decisions about whether to spawn a cancelable subtree, to 'protect' some execution or set a deadline are then written on an opt-in basis per function; all functions compose. The compiler can visualise the tree of cancellation (or hierarchical loggers, or OT spans, or actors, or green fibers, or ...) and it can enforce the global invariant that the entry-point captures SIGINT (or sets up logging, or sets up a tracer, or ...).
So imagine the infrastructure of a monad transformer, but available per-function on an opt-in basis. If you write your function to have a cleanup on cancellation, or write logs around any asynchronous barrier, the fiddly details of stuffing the monad are done by the compiler and optionally visualised and explained in the IDE. Your code doesn't have to opt in, so you can make each function very clean.
Yes, there's plenty of space for automation and advanced support from tooling. Hell, not every perspective is best viewed as plaintext; in particular, anything that looks like a directed graph fundamentally cannot be well-represented in plaintext at all without repeating nodes, breaking the 1:1 correspondence between a token and a thing represented by that token.
Still, I believe the core insight here is that we need different perspectives at different times. Using your example, most of the time I probably don't care whether the code is cancellable or not. Any mention of it is distracting noise to me. But other times - perhaps next day, or perhaps just five minutes later, I suddenly need to know whether the code is cancellable, and perhaps I need to explicitly opt out of it somewhere. It's highly likely that in those cases, I may not care about things like error handling logic and passing around session identifiers, and I would like that to disappear in those moments, etc.
And hell, I might need an overview of which code is or isn't protected, and that would be best served by showing me an interactive DAG of functions that I can zoom around and expand/collapse, so that's another kind of perspective. Etc.
EDIT:
And then there's my favorite example: the unending holy war of "few fat functions" vs. "lots of tiny functions". Despite the endless streams of Tweets and articles arguing for either, there is no right choice here - there's no right trade-off you can make up front, and there never can be, because which one is more readable depends strictly on why you're reading it. E.g. lots of tiny functions reduce duplication and can introduce a language you can use to effectively think about some code at a higher level - but if there's a thorny bug in there I'm trying to fix, I want all of that shit inlined into one big function that I can step through sequentially, following the actual execution order.
It is my firm belief that the ability to inline and uninline code on the fly, for yourself, personally, without affecting the actual execution or the work of other developers, is one of the most important missing pieces in our current tooling, and making it happen is a good first step towards abandoning The Current Paradigm that is now suffocating us all.
Second one would be, along with inlining, the ability to just give variables and parameters fixed values when reading, and have those values be displayed and propagated through the code - effectively doing a partial simulation of code execution. Being able to do it ad hoc, temporarily, would be a huge aid in quickly understanding what some code does.
Promises are so incredibly close to being a representation of work.
The OS has such sophisticated tools for process management, but inside a process there are so many subprocesses going on, & it feels like we are flailing about with poorly managed process-like things. (Everyone except Erlang.)
I love how close zx comes to touching the sky here. It's a TypeScript library for running processes, as a tagged template function returning a promise: const hello = $`sleep 3; echo hello world`. But the promise isn't just "a future value", it is a ProcessPromise for interacting with the process.
I so wish promises were just a little better. It feels like such a bizarre tragedy to me that "a promise is a future value", not a thing unto itself, won the day in ES6/ES2015 and destroyed the possibility of a promise being more; zx has run into a significant number of ergonomic annoyances because of this small-world dogma.
How cool it would be to see this go further. I'd love for the language to show what promises, if any, this promise is awaiting! I long for that dependency graph of subprocesses to start to show itself, not just at compile time, but for the runtime to be able to actively observe and manage the subprocesses within it. We keep building workflow engines and robust userlands that manage their own subprocesses, but the language itself seems so close & yet so far from letting the simple promise become more of a process, and that seems like a sad shame.
> it violates various architectural principles, for example, from the point of view of our business logic, there's no such thing as "tenant ID"
I'm not sure I understand how hiding this changes anything. Could you just not pass "tenant ID" to doBusinessLogic function and pass it to saveToDatabase function?
That's exactly what they're talking about: "tenantId" shouldn't be in the function signature for functions that aren't concerned with the tenant ID, such as business logic.
I have a feeling that if Context disappears, you'll just see "Context" becoming a common struct that is passed around. In Python, unlike in C# and Java, the first param of a class method is usually the class instance itself, conventionally called "self", so I could see this becoming the norm in Go.
Under the hood, in both Java and C#, the first argument of an instance method is the instance reference itself. After all, instance methods imply you have an instance to work with. Having to write 'this' by hand like that is how OOP was done before OOP languages became a thing.
I agree that adopting yet another pattern like this would be on-brand for Go, since it prizes its opinionated, vintage way of going about everything over being practical and convenient.
As a newcomer to Go, a lot of their design decisions made a lot of sense when I realized that a lot of the design is based around this idea of "make it impossible to do something that could be dumb in some contexts".
For example, I hate that there's no inheritance. I wish I could create a ContainerImage object and then a RemoteContainerImage subclass and then QuayContainerImage and DockerhubContainerImage subclasses from those. However, being able to do inheritance, and especially multiple inheritance, can lead to awful, idiotic code that is needlessly complicated for no good reason.
At a previous job we had a script that would do operations on a local filesystem and then FTP items to a remote. I thought okay, the fundamental paradigms of FTP and SFTP-over-SSH via the paramiko module are basically identical so it should be a five minute job to patch it in, right?
Turns out this Python script, which, fundamentally, consisted of "take these files here and put them over there" was the most overdesigned piece of garbage I've ever seen. Clean, effective, and entirely functional code, but almost impossible to reason about. The code that did the actual work was six classes and multiple subclasses deep, but assumptions were baked in at every level. FTP-specific functionality which called a bunch of generic functionality which then called a bunch of FTP-specific functionality. In order to add SFTP support I would have had to effectively rewrite 80% of the code because even the generic stuff inherited from the FTP-specific stuff.
Eventually I gave up entirely and just left it alone; it was too important a part of a critical workflow to risk breaking and I never had the time or energy to put my frustration aside. Golang, for all its flaws, would have prevented a lot of that because a lot of the self-gratification this programmer spent his time on just wouldn't have been possible in Go for exactly this reason.
> As a newcomer to Go, a lot of their design decisions made a lot of sense when I realized that a lot of the design is based around this idea of "make it impossible to do something that could be dumb in some contexts".
You are indeed a newcomer :) God bless you to shoot feet only in dev environments.
It sounds like you may have some friction-studded history with Go. Any chance you can share your experience and perspective with using the language in your workloads?
> instead of using them implicitly from the context, without documentation and static typing
This is exactly what context is trying to avoid, and makes a tradeoff to that end. There's often intermediate business logic that shouldn't need to know anything about logging or metrics collection or the authn session. So we stuff things into an opaque object, whether it's a map, a dict, a magic DI container, "thread local storage", or whatever. It's a technique as old as programming.
There's nothing preventing you from providing well-typed and documented accessors for the things you put into a context. The context docs themselves recommend it and provide examples.
If you disagree that this is even a tradeoff worth making, then there's not really a discussion to be had about how to make it.
I disagree that it's a good approach. I think that parameters must be passed down always, as parameters. It allows compiler to detect unused parameters and it removes all implicitness.
It is indeed verbose, and maybe there should be programming-language support to reduce that verbosity. Some languages support implicit parameters, which proved to be problematic, but maybe there should be more iteration on that matter.
I consider context for passing down values to do more harm than good.
Other responses cover this well, but: the idea of having to change 20 functions to accept and propagate a `user` field just so that my database layer can shard based on userid is gross/awful.
...but doing the same with a context object is also gross/awful.
> an implicit context variable that allows you to pass stuff deep down the call stack, without intermediate functions knowing about it. [...] but is actually very useful and can result in cleaner code with less globals or less superfluous function arguments. [...] and it applies to everything down the call stack (but not in other threads/async contexts).
In my experience, these "thread-local" implicit contexts are a pain, for several reasons. First of all, they make refactoring harder: things like moving part of the computation to a thread pool, making part of the computation lazy, calling something which ends up modifying the implicit context behind your back without you knowing, etc. All of that means you have to manually save and restore the implicit context (inheritance doesn't help when the thread doing the work is not under your control). And for that, you have to know which implicit contexts exist (and how to save and restore them), which leads to my second point: they make the code harder to understand and debug. You have to know and understand each and every implicit context which might affect code you're calling (or code called by code you're calling, and so on). As proponents of another programming language would say, explicit is better than implicit.
They're basically dynamic scoping and it's both a very useful and powerful and very dangerous feature ... scheme's dynamic-wind model makes it more obvious when the particular form of magic is in use but isn't otherwise a lot different.
I would like to think that somebody better at type systems than me could provide a way to encode it into one that doesn't require typing out the dynamic names and types on every single function, but can instead infer them based on what other functions are being called therein. But even assuming you had that, I'm not sure how much of the (very real) issues you describe it would ameliorate.
I think for golang the answer is probably "no, that sort of powerful but dangerous feature is not what we're going for here" ... and yet when used sufficiently sparingly in other languages, I've found it incredibly helpful.
Basically you'd be asking for inferring a record type largely transparently. That's going to quickly explode to the most naive form because it's very hard to tell what could be called, especially in Go.
> React has "Context", SwiftUI has "@Environment", Emacs LISP has dynamic scope (so I heard). C# has AsyncLocal, Node.JS AsyncLocalStorage.
Emacs Lisp retains dynamic scope, but it's no longer a default, and hasn't been for some time, in line with other Lisps that remain in use. Dynamic scope is one of the greatest features of the Lisp language family, and it's sad to see it missing almost everywhere else - where, as you noted, it's being reinvented, but poorly, because it's not a first-class language feature.
On that note, the most common case of dynamic scope that almost everyone is familiar with, are environment variables. That's what they're for. Since most devs these days are not familiar with the idea of dynamic scope, this leads to a lot of peculiar practices and footguns the industry has around environment variables, that all stem from misunderstanding what they are for.
> This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?)
It's not. It's about scoping a value to the call stack. Correctly used, rebinding a value to a dynamic variable should only be visible to the block doing the rebinding, and everything below it on the call stack at runtime.
> Implicit context (properly integrated into the type system) is something I would consider in any new language.
That's the problem I believe is currently unsolved, and possibly unsolvable in the overall programming paradigm we work under. One of the main practical benefits of dynamic scope is that place X can set up some value for place Z down on the call stack, while keeping everything in between X and Z oblivious of this fact. Now, this is trivial in dynamically typed language, but it goes against the principles behind statically-typed languages, which all hate implicit things.
(FWIW, I love types, but I also hate having to be explicit about irrelevant things. Since whether something is relevant or not isn't just a property of code, but also a property of a specific programmer at specific time and place, we're in a bit of a pickle. A shorter name for "stuff that's relevant or not depending on what you're doing at the moment" is cross-cutting concerns, and we still suck at managing them.)
> By default, the local bindings that Emacs creates are dynamic bindings. Such a binding has dynamic scope, meaning that any part of the program can potentially access the variable binding. It also has dynamic extent, meaning that the binding lasts only while the binding construct (such as the body of a let form) is being executed.
It’s also not really germane to the GP’s comment, as they’re just talking about dynamic scoping being available, which it will almost certainly always be (because it’s useful).
Sorry, you're right. It's not a cultural default anymore. I.e. Emacs Lisp got proper lexical scope some time ago, and since then, you're supposed to start every new .elisp file with:
;; -*- mode: emacs-lisp; lexical-binding: t; -*-
i.e. explicitly switching the interpreter/compiler to work in lexical binding mode.
> against the principles behind statically-typed languages, which all hate implicit things
But many statically typed languages allow throwing exceptions of any type. Contexts can be similar: "try catch" becomes "with value", "throw" becomes "get".
Yes, but then those languages usually implement only unchecked exceptions, as propagating error types up the call tree is seen as annoying. And then, because there are good reasons you may want to have typed error values (instead of just "any"), there is now pressure to use result types (aka "expected", "maybe") instead - turning your return type Foo into Result<Foo, ErrorType>.
And all that it does is make you spell out the entire exception-handling mechanism explicitly in your code - not just propagating the types up the call tree, but also making every function explicitly wrap, unwrap and branch on Result types. The latter is so annoying that people invent new syntax to hide it - like tacking ? onto the end of a function call, or whatever.
This becomes even worse than checked exception, but it's apparently what you're supposed to be doing these days, so ¯\_(ツ)_/¯.
We could require explicit effect (context, error) declarations for public functions and infer them for private functions. Explicit enumeration of possible exceptions is required for stable APIs anyway.
raku's take on gradual typing may be to your taste; i likewise prefer to leave irrelevant types out and use maximally-expressive types where it makes sense¹. i feel this is helped by the insistence on sigils because you then know the rough shape of things (and thus a minimal interface they implement: $scalar, @positional, %associative, &callable) even when you lack their specific types. in the same vein, dynamically scoped variables are indicated with the asterisk as a twigil (second level sigil).
@foo
is a list (well, it does Positional anyway), while
@*foo
is a different variable that is additionally dynamically scoped.
it's idiomatic to see
$*db
as a database handle to save passing it around explicitly, env vars are in
%*ENV
things like that. it's nice to have the additional explicit reminder whenever you're dealing with a dynamic variable in a way the language checks for you and yells at you for forgetting.
i would prefer to kick more of the complex things i do with types back to compile time, but a lot of static checks are there. more to the point, raku's type system is quite expressive at runtime (that's what you get when you copy common lisp's homework, after all) and helpful to move orthogonal concerns out into discrete manageable things that feel like types to use even if what they're doing is just a runtime branch that lives in the function signature. doing stuff via subset types or roles or coercion types means whatever you do plays nicely with polymorphic dispatch, method resolution order, pattern matching, what have you.
in fact, i just wrote a little entirely type level... thing? to clean up the body of an http handler that lifts everything into a role mix-in pipeline that runs from the database straight on through to live reloading of client-side elements. processing sensor readings for textual display, generating html, customizing where and when the client fetches the next live update, it's all just the same pipeline applying roles to the raw values from the db with the same infix operator (which just wraps a builtin non-associative operator to be left associative to free myself from all the parentheses).
not getting bogged down in managing types all the time frees you up to do things like this when it's most impactful, or at least that's what i tell myself whenever i step on a rake i should have remembered was there.
¹ or times where raku bubbles types up to the end-user, like the autogenerated help messages generated from the type signature of MAIN. i often write "useless" type declarations such as subset Domain-or-IP; which match anything² so that the help message says --host[=Domain-or-IP] instead of --host[=Str] or whatever
² well, except junctions, which i consider the current implementation of to be somewhat of a misstep since they're not fundamentally also a list plus a context. it's a whole thing. in any case, this acts at the level of the type hierarchy that you want anyway.
As a veteran of a large scala project (which was re-written in go, so I'm not unbiased), no. I was generally not happy.
This was scala 2, so implicit resolution lookup was a big chunk of the problem. There's nothing at the call site that tells you what is happening. But even when it wasn't hidden in a companion object somewhere, it was still difficult because every import change had to be scrutinized as it could cause large changes in behavior (this caused a non-zero number of production issues).
They work well for anything you would use environment variables for, but a chunk of the ecosystem likes to use them for handlers (the signature being a Functor generally), which was painful
> There's nothing at the call site that tells you what is happening.
A decent IDE highlights it at the call site.
It's definitely an abusable feature, but I find it very useful. In most other languages you end up having to have completely invisible parameters (e.g. database session bound to the thread) because it would be too cumbersome to pass them explicitly. In Scala you have a middle ground option between completely explicit and completely invisible.
I'm not sure what you consider a decent scala ide, but it was a problem with IntelliJ in several of our code bases, and I'd have to crawl the implicit resolution path.
I eventually opted to desugar the Scala completely, but we were already on the way out of Scala by that point.
> it was a problem with IntelliJ in several of our code bases
It shouldn't be, unless you were using macros (always officially experimental) or something - I was always primarily an Eclipse guy but IntelliJ worked well. Did you not get the green underline?
yeah that's what i thought, but maybe scala implicit params not being perfect will help in finding a better linguistic trait (maybe they should enforce purity on these parameters)
IMO it is perfect, or at least better than anything else that's been found so far.
"Purity" means different things in different contexts. Ultimately you can give programmers more tools, but you can't get away from relying on their good judgement.
Not OP, but I was briefly seconded to a team that used Scala at a big tech co, and I was often frustrated by this feature specifically. They had a lot of code that consumed implicit parameters that I was trying to call from contexts where those parameters were not available.
Then again I guess it's better than a production outage because the thread-local you didn't know was a requirement wasn't available.
Algebraic effects and implicit arguments with explicit records are perfectly cromulent language features. GHC Haskell already has implicit arguments, and IIRC Scala uses them instead of a typeclass/trait system. The situation with extensible records in Haskell is more troublesome, but it’s more because of the endless bikeshedding of precisely how powerful they should be and because you can get almost all the way there with the existing type-system features except the ergonomics invariably suck.
It’s reasonable, I think, to want the dynamic scope but not the control-flow capabilities of monads, and in a language with mutability that might even be a better choice. (Then again, maybe not—SwiftUI is founded on Swift’s result builders, and those seem pretty much like monads by another name to me.) And I don’t think anybody likes writing the boilerplate you need to layer a dozen MonadReaders or -States on each other and then compose meaningful MonadMyLibraries out of them.
Finally, there’s the question of strong typing. You do want the whole thing to be strongly typed, but you don’t want the caller to write the entire dependency tree of the callee, or perhaps even to know it. Yet the caller may want to declare a type for itself. Allowing type signatures to be partly specified and partly inferred is not a common feature, and in general development seems to be backing away from large-scale type inference of this sort due to issues with compile errors. Not breaking ABI when the dependencies change (perhaps through default values of some sort) is a more difficult problem still.
(Note the last part can be repeated word for word for checked exceptions/typed errors. Those are also, as far as I’m aware, largely unsolved—and no, Rust doesn’t do much here except make the problem more apparent.)
Thread-local storage assumes each async task (goroutine) stays on a single thread. That isn't how tasks are actually scheduled: a request can fan out, or contention can move parts of the computation between threads, which is why context exists.
Furthermore, in Go, threads are spun up at process start, not at request time, so thread-local storage carries a leak risk or a cleanup cost. Contexts are all releasable after their processing ends.
I've grown to be a huge fan of Go for servers and context is one reason. That said, I agree with a lot of the critique and would love to see an in-language solution, but thread-local ain't it.
A more correct term is "goroutine-local" storage, which Go _already_ has. It's used for pprof labels, they are even inherited when a new Goroutine is started.
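For reference, a minimal example of those labels (runtime/pprof attaches them to the current goroutine for the duration of the callback, and goroutines started inside inherit them):

    package main

    import (
        "context"
        "fmt"
        "runtime/pprof"
    )

    func main() {
        // Labels set via pprof.Do are goroutine-local for profiling
        // purposes and are inherited by child goroutines.
        pprof.Do(context.Background(), pprof.Labels("tenant", "acme"),
            func(ctx context.Context) {
                v, _ := pprof.Label(ctx, "tenant") // also readable via ctx
                fmt.Println("tenant label:", v)
            })
    }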
seeing it is great. coming into a hairy monolith and having to plumb one variable through half a dozen layers to get to the creamy nougat layer you actually wanted it in, is not. having to do that more than once is why they invented the "magic" implicit context variable.
A good pitch for dynamic (context) variables is that they're not globals, they're like implicit arguments passed to all functions within the scope.
Personally I've used the (ugly) Python contextvars for:
- SQS message ID, to allow extending message visibility anywhere in the code
- scoped logging context in logstruct (structlog killer in development :D)
I no longer remember what I used Clojure dynvars for, probably something dumb.
That being said, I don't believe that "active" objects like DB connection/session/transaction are good candidates for a context var value. Programmers need to learn to push side effects up the stack instead. Flask-SQLAlchemy is not correct here.
Even Flask's request object being context-scoped is a bad thing since it is usually not a problem to do all the dispatching in the view.
Yeah, I agree 100% with you. The thing with Golang is that it's supposed to be a very explicit language, so passing the context as an argument fits in with the rest of the language.
Nevertheless: just having it, be it implicit or explicit, beats having to implement it yourself.
> If you use ctx.Value in my (non-existent) company, you’re fired
This is such a bad take.
ctx.Value is incredibly useful for passing around context of api calls. We use it a lot, especially for logging such context values as locales, ids, client info, etc. We then use these context values when calling other services as headers so they gain the context around the original call too. Loggers in all services pluck out values from the context automatically when a log entry is created. It's a fantastic system and serves us well. e.g.
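Something along these lines, perhaps — a minimal slog-based sketch (the ContextHandler and the requestID key are hypothetical, not their actual setup):

    package main

    import (
        "context"
        "log/slog"
        "os"
    )

    type ctxKey string

    // ContextHandler decorates a slog.Handler so every record picks up
    // known request-scoped values from ctx.
    type ContextHandler struct{ slog.Handler }

    func (h ContextHandler) Handle(ctx context.Context, r slog.Record) error {
        if v, ok := ctx.Value(ctxKey("requestID")).(string); ok {
            r.AddAttrs(slog.String("request_id", v))
        }
        return h.Handler.Handle(ctx, r)
    }

    func main() {
        ctx := context.WithValue(context.Background(), ctxKey("requestID"), "abc-123")
        logger := slog.New(ContextHandler{slog.NewJSONHandler(os.Stdout, nil)})
        logger.InfoContext(ctx, "handling request") // carries request_id=abc-123
    }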
`ctx.Value` is an `any -> any` kv store that does not come with any documentation or type checking for which keys and values should be available. It's quick and dirty, but in a large code base, it can be quite tricky to check whether you are passing too many values down the chain, or too few, and to handle the failure cases.
What if you just use a custom struct with all the fields you may need to be defined inside? Then at least all the field types are properly defined and documented. You can also use multiple custom "context" structs in different call paths, or even compose them if there are overlapping fields.
Because you should wrap that in a type-safe function. You should not use ctx.Value() directly but use your own function; the context is just a transport mechanism.
> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available
The docs https://pkg.go.dev/context#Context suggest a way to make it type-safe (use an unexported key type and provide getter/setter). Seems fine to me.
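A minimal sketch of what the docs describe, with a hypothetical user-ID value:

    package auth

    import "context"

    // userIDKey is unexported, so no other package can construct a
    // colliding key.
    type userIDKey struct{}

    // WithUserID returns a copy of ctx carrying the user ID.
    func WithUserID(ctx context.Context, id string) context.Context {
        return context.WithValue(ctx, userIDKey{}, id)
    }

    // UserID extracts the user ID, if present; callers never touch
    // ctx.Value or type assertions directly.
    func UserID(ctx context.Context) (string, bool) {
        id, ok := ctx.Value(userIDKey{}).(string)
        return id, ok
    }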
> What if you just use a custom struct with all the fields you may need to be defined inside?
> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available.
On a similar note, this is also why I highly dislike struct tags. They're string magic that should be used sparingly, yet we've integrated them into data parsing, validation, type definitions and who knows what else just to avoid a bit of verbosity.
Most popular languages support annotations of one kind or another; they let you do all that in a type-safe way. It's Go that's decided to be different for difference's sake, and produced a complete mess.
IMO Go is full of stuff like this where they do something different than most similar languages for questionable gains. `iota` instead of enums, implicit interfaces, full strings in imports (not talking about URLS here but them having string literal syntax), capitalization as visibility control come to mind immediately, and I'm sure there are others I'm forgetting. Not all of these are actively harmful, but for a language that touts "simplicity" as one of its core values, I've always found it odd how many different wheels Go felt the need to reinvent without any obvious benefit over the existing ones.
the second i tried writing go to solve a non-trivial problem the whole language collapsed in on itself. footguns upon footguns hand-waved away with "it's the go way!". i just don't understand. the "the go way" feels more like a mantra that discourages critical thinking about programming language design.
It did not have to be this way; this is a shortcoming of Go itself. Generic interfaces make things a bit better, but Go designers chose that dumb typing in the first place. The std lib is full of interface{} use itself.
context itself is an afterthought, because people were building thread-unsafe, leaky code on top of http requests with no good way to easily scope variables that would scale concurrently.
I remember the web session lib for instance back then, a hack.
ctx.Value is made for goroutine-scoped data, that's the whole point.
If it is an antipattern well, it is an antipattern designed by go designers themselves.
People who have takes like this have likely never zoomed out enough to understand how their software delivery ultimately affects the business. And if you haven't stopped to think about that you might have a bad time when it's your business.
Someone has to question the status quo. If we just did the same things there would be a lot less progress. The author took the time to articulate their argument, and publish it. I appreciate their effort even if I may not agree with their argument.
The author gave a pretty good reasoning for why it is a bad idea, in the same section. However, for demonstration purposes, I think they should have included their vision of how request-scoped data should be passed.
As I understand they propose to pass the data explicitly, like a struct with fields for all possible request-scoped data.
I personally don't like context for value passing either, as it is easy to abuse in a way that it becomes part of the API: the callee is expecting something from the caller but there is no static check that makes sure it happens. Something like passing an argument in a dictionary instead of using parameters.
However, for "optional" data whose presence is not required for the behavior of the call, it should be fine. That sort of discipline has to be enforced on the human level, unfortunately.
If you use a type like `map[string]any` then yes, it's going to be the same as Context. However, you can make a struct with fields of exactly the types you want.
It won't propagate to the third-party libraries, yes. But then again, why don't they just provide an explicit way of passing values instead of hiding them in the context?
> why don't they just provide an explicit way of passing values instead of hiding them in the context?
Hiding them in a context is the explicit way of passing values through oblivious third-party libraries.
In some future version of Go, it would be nice to just have dynamic scoping. But this works now, and it’s a good pattern. The only real issue is the function-colouring one, and that’s solvable by simply requiring that every exported function take a context.
Precisely because you need to be able to pass it through third party libraries and into callbacks on the other side where you need to recover the values.
Yeah most people talking here are unlikely to have worked on large scale Go apps.
Managing a god-level context struct with all the fields that ever could be relevant and explaining what they mean in position independent ways for documentation is just not scalable at all.
Import cycles mean you’re forced into this if you want to share between all your packages, and it gets really hairy.
We effectively use this approach in most of our go services. Other than logging purposes, we sometimes use it to pass stuff that is not critical but highly useful to have, like some request and response bodies from HTTP calls, tenant information and similar info.
As others have already mentioned, there won't be a Go 2. Besides, I really don't want another verbose method for cancellation; error handling is already bad enough.
Contexts in Go are generally used for convenience in request cancellation, but they're not required, and they're not the only way to do it. Under the hood, a context is just a channel that's closed on cancellation. The way it was done before contexts was pretty much the same:
    func CancellableOp(done chan error /* , args... */) {
        for {
            // ...
            // cancellable code:
            select {
            case <-something:
                // ...
            case err := <-done:
                // log error or whatever
            }
        }
    }
Some compare the context "virus" to the async virus in languages that bolt an async runtime on top of sync syntax - but the main difference is that you can compose context-aware code with context-oblivious code (by passing context.Background()), and vice versa, with no problems. E.g. here's a context-aware wrapper for the standard `io.Reader` that is completely compatible with `io.Reader`:
    type ioContextReader struct {
        io.Reader
        ctx context.Context
    }

    func (rc ioContextReader) Read(p []byte) (n int, err error) {
        done := make(chan struct{})
        go func() {
            n, err = rc.Reader.Read(p)
            close(done)
        }()
        select {
        case <-rc.ctx.Done():
            return 0, rc.ctx.Err()
        case <-done:
            return n, err
        }
    }
    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        rc := ioContextReader{Reader: os.Stdin, ctx: ctx}
        // we can use rc in io.Copy as it is an io.Reader
        _, err := io.Copy(os.Stdout, rc)
        if err != nil {
            log.Println(err)
        }
    }
For io.ReadCloser, we could call `Close()` method when context exits, or even better, with `context.AfterFunc(ctx, rc.Close)`.
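A sketch of that variant (context.AfterFunc exists since Go 1.21; note the caveat below about Close during a concurrent Read):

    package main

    import (
        "context"
        "io"
    )

    // closeOnCancel arranges for rc to be closed when ctx is cancelled,
    // unblocking any in-flight Read on implementations that support it.
    // The returned stop func detaches the callback once rc is done with.
    func closeOnCancel(ctx context.Context, rc io.ReadCloser) (stop func() bool) {
        return context.AfterFunc(ctx, func() { rc.Close() })
    }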
Contexts definitely have flaws - verbosity being the one I hate the most - but having them behave as ordinary values, just like errors, makes context-aware code more understandable and flexible.
And just like errors, having cancellation done automatically makes code more prone to errors. When you don't put "on-cancel" code, your code gets cancelled but doesn't clean up after itself. When you don't select on `ctx.Done()` your code doesn't get cancelled at all, making the bug more obvious.
You are half right. A context also carries a deadline. This is important for those APIs which don't allow asynchronous cancellation but which do support timeouts as long as they are set up in advance. Indeed, your ContextReader is not safe to use in general, as io.ReadCloser does not specify the effect of concurrent calls to Close during Read. Not all implementations allow it, and even when they do tolerate it, they don't always guarantee that it interrupts Read.
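For deadline-capable APIs like net.Conn, the usual move is to translate the context's deadline up front; a sketch:

    package main

    import (
        "context"
        "net"
        "time"
    )

    // readWithDeadline propagates ctx's deadline to the connection before
    // reading, for APIs that only support timeouts set up in advance.
    func readWithDeadline(ctx context.Context, c net.Conn, p []byte) (int, error) {
        if d, ok := ctx.Deadline(); ok {
            if err := c.SetReadDeadline(d); err != nil {
                return 0, err
            }
            defer c.SetReadDeadline(time.Time{}) // clear the deadline afterwards
        }
        return c.Read(p)
    }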
This works, but goes against convention in that (from the context package docs) you shouldn’t “store Contexts inside a struct type; instead, pass a Context explicitly to each function that needs it.”
> What will go wrong if one stores a Context in a struct?
Contexts are about the dynamic contour, i.e. the dynamic call stack. Storing the current context in a struct and then referring to it in some other dynamic … context … is going to lead to all sorts of pain: timeouts or deadlines which have already expired and/or values which are no longer pertinent.
While there are some limited circumstances in which it may be appropriate, in general it is a very strong code smell. Any code which passes a context should receive a context. And any code which may pass a context in the future should receive one now, to preserve API compatibility. So any exported function really should have a context as its first argument for forwards-compatibility.
This guidance is actually super important, as contexts are expected to be modified in a code flow and apply to all functions that are downstream of your current call stack.
If you store contexts on your structs it’s very likely you won’t thread them correctly, leading to errors like database code not properly handling transactions.
Actually super fragile and you should avoid doing this as much as is possible. It’s never a good idea!
    ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
    reader := ioContextReader{Reader: r, ctx: ctx}
    ...
    ctx, cancel := context.WithTimeout(ctx, 1*time.Second)
    ctx = context.WithValue(ctx, "hello", "world")
    ...
    func(ctx context.Context) {
        reader.Read() // does not time out after one second, does not contain hello/world.
        ...
    }(ctx)
Of course not - you're not handling the context at all in the called function. What's there to consider, reader.Read() has no idea about your timeout and value store intent. How would it, telepathy?
Changing the interface 1) is obviously not relevant.
Re-wrapping works only for the toy example. In the real world, the reader isn't some local variable, but there could be many, across different structs, behind private fields.
To circle back, and not focus too much on the io.Reader example: the virality of ctx is real, and making wrapper structs is not a good solution. Updating stale references may not be possible and would quickly become overwhelming. Not to forget the performance overhead.
Personally I think it's okay, go is fine as a "webservices" language. The go gospel is, You can have your cake and eat it too, but it's almost never true unless you twist the meaning of "cake" and "eat".
Yes, but this is just proof of concept. For any given case, you can optimize your approach to your needs. E.g. single goroutine ReadCloser:
    type ioContextReadCloser struct {
        io.ReadCloser
        ctx context.Context
        ch  chan *readReq
    }

    type readReq struct {
        p   []byte
        n   *int
        err *error
        m   sync.Mutex
    }

    func NewIoContextReadCloser(ctx context.Context, rc io.ReadCloser) *ioContextReadCloser {
        rcc := &ioContextReadCloser{
            ReadCloser: rc,
            ctx:        ctx,
            ch:         make(chan *readReq),
        }
        go rcc.readLoop()
        return rcc
    }

    func (rcc *ioContextReadCloser) readLoop() {
        for {
            select {
            case <-rcc.ctx.Done():
                return
            case req := <-rcc.ch:
                *req.n, *req.err = rcc.ReadCloser.Read(req.p)
                if *req.err != nil {
                    req.m.Unlock()
                    return
                }
                req.m.Unlock()
            }
        }
    }

    func (rcc *ioContextReadCloser) Read(p []byte) (n int, err error) {
        req := &readReq{p: p, n: &n, err: &err}
        req.m.Lock() // use plain mutex as signalling for efficiency
        select {
        case <-rcc.ctx.Done():
            return 0, rcc.ctx.Err()
        case rcc.ch <- req:
        }
        req.m.Lock() // wait for readLoop to unlock
        return n, err
    }
Again, this is not to say this is the right way, only that it is possible, and it does not require any of the shenanigans that e.g. Python needs when mixing sync & async, or even different async libraries.
> It’s best not to acquire a mutex and launch a goroutine to read 3 bytes of data at a time.
io.Copy uses 32KB buffers. Other parts of standard library do too. If you're using Read() to read 3 bytes of data at a time, mutex is the least of your worries.
Since you seem to be ignoring the sarcasm of my previous comment - just saying "don't do that" without suggesting an alternative for the particular code context you're referring to isn't useful at all. It's just annoying.
> It’s not my job to provide you with alternative code. That’s your job.
It is not your job to tell me "that is wrong", yet you do it because it's easy. Suggesting an alternative (not necessarily providing the code) is less easy, so you don't wanna do it. That's fine. I just want you to be aware that the former without the latter is pretty much useless.
It is also easy to tell you that you’re wrong if you were to post that you use your forehead to hammer nails into a post.
Posting detailed instructions on how to identify, purchase and utilise a hammer isn’t something I need to do, and doesn’t negate the correctness of the initial “don’t do that”.
You've been missing the point of bheadmaster's posts, which (as it seems to me) was to show that "you can compose context-aware code with context-oblivious code (by passing context.Background()), and vice versa with no problems". Bheadmaster gave some proof of concept code showing how to do that. The code might be somewhat inefficient, but that doesn't invalidate the point. If you think there's a more efficient way to compose context-aware code with context-oblivious code, then the best way to make that case would be to explain how to do so.
> This probably doesn’t happen often, but it’s prone to name collisions.
It's funny, it really was just using strings as keys until quite recently, and obviously there were collisions and there was no way to "protect" a key/value, etc.
Now the convention is to use a key with a private type, so no more collisions. The value you get is still untyped and needs to be cast, though. Also, many older libraries still use strings.
> It’s very similar to thread-local storage. We know how bad of an idea thread-local storage is. Non-flexible, complicates usage, composition, testing.
I kind of do wish we had goroutine local storage though :) Passing down the context of the request everywhere is ugly.
I like explicit over implicit. I will take passing down context (in the sense of the concept, not the specific Go implementation) explicitly everywhere over implicit ("put it somewhere and I'll trust I can [probably, hopefully] get it back later") any day of the week.
I've seen plenty of issues in Java codebases where there was an assumption some item was in the Thread Local storage (e.g. to add some context to a log statement or metric) and it just wasn't there (mostly because code switched to a different thread, sometimes due to a "refactor" where stuff was renamed in one place but not in another).
Most recently I've been bitten by this with Datadog. The Python version does some monkeypatching to inject trace info. The Go version needs the trace info injected explicitly. While the latter takes more setup, it was much easier to understand what was going on and to debug when we ran into issues.
Sounds very familiar. I was a Java developer for a long time, and in that ecosystem adding a library to your project can be enough for code to be activated and run. There are plenty of libraries where the idea is: just include it, magic stuff will happen, and everything works! That is, until it doesn't work. And then you have to try and debug all this magic stuff of how Java automatically loads classes, how these classes are created and run, and what they do. Didn't happen very often, but when it happened usually a full week was wasted with this.
I really prefer spending a bit more time to set it up myself (and learn something about what I'm using in the process) and knowing how it works, than all the implicit magic.
This is why I avoid Python. I started doing Go after looking at a few solutions written in Python that I couldn't use.
Magic values inside objects, at recursive depth, changing dynamically at runtime. After working for some time with functional languages and languages with immutable structures, I'm afraid of such features today.
Context is nice because it’s explicit. Even function header spills the detail. `GetXFromName(context.Context, string)` already says that this call will do some IO/remote call and might never return or be subject of cancellation.
Contexts spread just like exceptions do: the moment you introduce one, it flies up and down all the functions to get where it needs to be. I can't help but think that goroutine-local storage and operations on it, like the thread-locals Java has, would be a cleaner solution to the problem.
Was this solved? Is this context only a cancellation flag or does it do something more? The obvious solution for a cancellation trigger would be to have cancellation as an optional second argument. That's how it's solved in e.g. C#. Failing to pass the argument just makes it CancellationToken.None, which is simply never cancelled. So I/O without cancellation is simply foo.ReadAsync(x) and with cancellation it's foo.ReadAsync(x, ct).
Consider what happens in JavaScript when you declare a function as async. Now everything calling it is infected. Passing around runtime constructs like context in Go (AbortSignal in JS) or an allocator in Zig gives exactly the right level control back to the call and I love it. You can bail out of context propagation at any level of your program if that's your desire.
The major features that may have required a 2.0 were implemented in a backwards-compatible way, removing the utility of a Go 2.0.
Go 2.0 was basically a blank check for the future that said "We may need to break backwards compatibility in a big way". It turns out the Go team does not see the need to cash that check and there is no anticipated upcoming feature in the next several years that would require it.
The last one that I was sort of wondering about was the standard library, but the introduction of math/rand/v2 has made it clear the devs are comfortable revving standard library packages without a Go 2. There are a number of standard libraries that I think could stand to take a v2; there aren't any that are so broken that it's worth a v2 to hard-remove them. (Except arguably syscall [1], which turns out it doesn't belong in the standard library because it can't maintain the standard library backwards compatibility and should have been in the extended standard library from the beginning, but that's been the way it is now for a long time and also doesn't rate a v2.)
(And again let me underline I'm not saying all the standard library is perfect. There is some brokenness here and there, for various definitions of "brokenness". I'm just saying it's not so broken that it's worth a v2 hard break at the language level and hard elimination of the libraries such that old code is forcibly broken and forced to update to continue on.)
The introduction of Python 3 wasn't a mistake. The mistake was discontinuing Python 2.
Just look at how rust does it. Rust 1.0 code still works in the latest version of rustc, you just need to set the project to the Rust 2015 edition. You can even mix-and-match editions, as each crate can have a different edition. Newer versions of rustc will always support all previous editions, and breaking changes are only ever introduced when a new edition is released every 3 years. If the crate is stable, no real reason to upgrade, it will work forever. And if you do need to update a project, you can split it into multiple crates and do it incrementally.
Just imagine how much smoother the python 3 transition would have been if you could transition projects incrementally, module by module as needed.
It seems you are both saying the same thing. Had Python not introduced a line in the sand and instead continued to support Python 2 amid future updates there would have been no reason for Python 3. The Python 2 line could have kept improving instead.
Just as you say, Python could have introduced what is found in Python 3 without breaking Python 2 support. Which is the direction Go has settled on; hence why Go 2 is off the table. Go 1.0 and Go 1.23 are very different languages, but backwards version support is retained, so no need for a new major version.
The trouble with the Rust community is that it is terrible at communication. That may be why you presuppose that everyone understands the meaningful difference between Rust editions and Go version directives, but I can't tell a difference beyond the frivolous, like the syntax used. Based on the documentation of each, they seem like the exact same concept, with the exact same goals in mind. As a result, unfortunately, your point is not yet made. Perhaps you can break the cycle and describe, for everyday people, how Rust editions are fundamentally different?
Editions allow making breaking changes to Rust without splitting the ecosystem - no hassle caused to existing code unless it opts into the new edition and its breaking changes. There's currently editions 2015, 2018, 2021, and 2024. When a new edition is introduced, it can make breaking changes such as introducing new keywords, but every previous edition remains supported forever by newer compiler versions.
The key part is that editions are configured per-library - libraries A and B might use editions 2015 and 2021, and your application could use edition 2018 and depend on those libraries, and it works.
If you wrote a library with the original 2015 edition, and never upgraded it to deal with new `async` and `await` keywords added in the 2018 edition, that's totally fine. Newer compilers will continue to compile it in its configured edition=2015 mode, where the new keywords don't exist (so your local variable named `async` still compiles), and new code written against newer editions may still use this 2015 edition library with no issue.
Editions are different from Go version directives: a version directive says "my module needs features added in this Go version", but it doesn't enable Go to make breaking changes to the language.
Editions can't do every kind of breaking change however - they mostly work for syntax level changes, and don't work for things like tearing out regrettable parts of the standard library.
> The key part is that editions are configured per-library - libraries A and B might use editions 2015 and 2021
In what way is that key? It still reads as being the same as the Go version directive. Obviously there are some differences in the implementation. For example, Go puts it in go.mod, while Rust puts it in Cargo.toml, but at a conceptual level I fail to see any fundamental difference. As you describe it, and how the documentation describes it, they attempt to accomplish the same thing for the same reason.
But, as phire puts it, they are "very different". But I don't see how. The tradition of the Rust community being horrible at communication carries on, I'm afraid. As before, you are going to have to speak to those who aren't deep in the depths of programming languages. Dumb it down for the reader who uses PHP and has never touched Go or Rust in their life.
> they don't enable Go to make breaking changes to the language.
What, exactly, do you mean? The change to loop-variable semantics comes to mind: it was clearly a breaking change to the language, but gracefully handled with the version directive. What purpose are you under the impression the directive serves, if not dealing with breaking changes?
For example, https://doc.rust-lang.org/edition-guide/rust-2021/warnings-p... - code that produced a lint warning in the 2018 edition produces a compiler error in the 2021 edition. That would be something that can't be done in a backwards compatible way without editions
Another example would be changes to the import syntax https://doc.rust-lang.org/edition-guide/rust-2018/path-chang... - the compiler will forever support the 2015 behavior in crates that use the 2015 edition, but crates using newer editions can use the newer behavior
As stated before, the documentation for both languages was already consulted. It did not clear up how Rust is any different from Go in this regard. Consider a simple example from the Go documentation: support for numeric underscores, which was not part of the original language and was later included in a 'new edition' of Go:
    i := 10_000_000
Using the 1.13 or later version of the gc compiler, if your go.mod specifies anything after 1.12 the above compiles fine. But if go.mod asserts go 1.12 or earlier, you will get a compiler error from the above code as the compiler reverts to 1.12 or earlier behaviour based on the version directive. That sounds exactly like what you described! And, like I said before, Rust's documentation too echoes to my read that editions accomplish basically the same thing and exist for the same reason Go version directives exist.
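For concreteness, a minimal sketch of the setup described above (module name hypothetical):

    // go.mod
    module example.com/underscores

    go 1.12

    // main.go
    package main

    import "fmt"

    func main() {
        // Rejected while go.mod says "go 1.12" ("underscores in
        // numeric literals requires go1.13 or later"); fine once
        // the directive is bumped to go 1.13 or higher.
        i := 10_000_000
        fmt.Println(i)
    }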
But the earlier commenter indicated that they are very different. So, unfortunately, you have again failed to break the cycle. We need something dumbed down for us regular people, not something directed at those who walk, talk, and sleep Rust.
Sorry to have disappointed you. I don't walk, talk, or sleep either Rust or Go, but was trying to provide some resources to help in case you hadn't seen them yet.
One difference I noticed in the docs: the Go Reference says the "go" line of a go.mod has to be greater than or equal to the "go" line of all that module's dependencies (if the go line is 1.21 or higher), so a module for 1.21 can't depend on a module for 1.22 [1]
That restriction doesn't apply for Rust, a library using the 2015 edition can use a dependency that uses the 2018 edition, for example.
That's just one difference I noticed in the implementation. The goals seem very similar if not the same
Thanks for trying. But it is the "which is very different to what go has now settled on" that we are trying to get to the bottom of. It appears from your angle that you also conclude that Go has settled on the very same thing, frivolous implementation details aside. Hopefully phire will still return to dumb it down for us.
Ok... the only thing that go version directives do is selectively enable new features. Essentially, they are only really there to help you ensure your code will continue compiling in older versions of the compiler. Hell, until recently it wouldn't even throw an error if you tried to compile code with a future version directive.
The actual backwards compatibility in go is achieved by never removing functionality or syntax. New versions can only ever add new features/syntax. If there was a broken function in the API, that function needs to stick around forever, and they will be forced to add a second version of that function that now does the correct thing.
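One concrete instance of that policy: strings.Title mishandles Unicode word boundaries, so Go 1.18 deprecated it and pointed at golang.org/x/text/cases as the replacement, but the old function still compiles and will keep working indefinitely:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Deprecated since Go 1.18, but the compatibility promise
        // means it sticks around forever.
        fmt.Println(strings.Title("hello, world")) // "Hello, World"
    }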
So, you can take code written for go 1.0, slap on a "go 1.23" directive and it will compile just fine. That's the guarantee that go provides. Well, mostly. There are a few examples of go 1.0 code that doesn't compile anymore, even when you use a "go 1.0" directive.
But not being able to remove anything ever is limiting.
A good example of how this can be limiting is reserved keywords. Go has a fixed set of reserved keywords that they picked for 1.0, and they can never reserve any more, any code using them as identifiers will break. Any new feature needs to be carefully designed to never need a new reserved keyword. Either they reuse an existing keyword (which c++ does all the time), or they use symbols instead.
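Go's generics rollout shows this in action: rather than reserving a new keyword, `any` was added as a predeclared identifier, which existing code is still free to shadow. A small illustration:

    package main

    import "fmt"

    func main() {
        // Legal: "any" is a predeclared identifier (an alias for
        // interface{}), not a reserved keyword, so old code that
        // uses the name keeps compiling.
        any := "still a valid variable name"
        fmt.Println(any)
    }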
But rust can reserve new keywords. The 2018 edition of rust reserved "async" and "await" for future async functionality and "try" for a potential try block. Rust did reserve "yield" from the start for generators, but decided they needed a way to mark a function as a generator, so in the 2024 edition, "gen" is now a reserved keyword, breaking any code that uses gen as a function/variable name.
Do note that rust also follows the go strategy within an edition. There is only one new edition every three years, and it would be a pain if all new features had to wait for a new edition. So the async/await keywords were reserved in the 2018 edition, but didn't actually get used until the end of 2019.
This means that just because your rust version supports the 2018 edition doesn't mean it will compile all 2018 code. The editions are just for breaking changes, and there is a separate minimum "rust-version" field that's somewhat equivalent to go's "go 1.x" directive. Though "rust-version" doesn't disable features; it's just there to provide a nice clean warning to users on old compilers. Ideally in the future it will gain the ability to selectively disable language features (rust already has extensive support for selectively enabling experimental language features in nightly builds, which we haven't even talked about here).
Basically, rust editions allow all breaking changes to be bundled up and applied once every three years. They also provide a way for compilers to continue supporting all previous editions, so old code will continue to work by sticking with an old edition. While go's version directive looks superficially similar to editions, it is there for a different reason and doesn't actually allow for breaking changes.
> The actual backwards compatibility in go is achieved by never removing functionality or syntax.
The previous version removed functionality related to loop variables, so that is not strictly true. You might be right that the project doesn't take change lightly. There has to be a very compelling reason to justify such change, and why would it be any other way? If something isn't great, but still gets the job done, there is no reason to burden developers with having to learn a new language.
Go is not exactly the most in-depth language ever conceived. There is not much functionality or syntax that could be removed without leaving it inoperable. But there is no technical reason why it couldn't. The mechanics to allow it are already there, and it doesn't even violate the Go 1 guarantee to do so under the operation of those mechanics.
So, sure, it is fair to say that there is a social reason for Go making as few breaking/incompatible changes as possible, but we were talking about the technology for allowing breaking/incompatible changes to co-exist, and how the same concept could have been applied to Python. In Rust community fashion, I am not sure you have improved on the horrible communication. We recognized that there is some difference in implementation details right from the onset, but the overall concept still seems to me to be the same in both cases.
Again, we need it dumbed down for the everyday average person. Your audience doesn't eat monads for breakfast like your expression seems to believe.
> Go 1.22, for example, removed functionality related to loop variables, so that is not strictly true.
Ah, interesting. I searched but couldn't find any example of go actually making a breaking change. Rust has a massive document [1] listing every single breaking change in one place. With go you kind of have to dig through the release notes of each version.
So, maybe Golang is relaxing its stance slightly on backwards compatibility, now that it has a mechanism that does kind of work. Which is good; I encourage that. But the official stance is still that most code from go 1.0 should work without issues.
> there is no reason to burden developers with having to learn a new language.
To be clear, many of the breaking changes in Rust editions are the same kind of thing as that go loop example. Edge cases where it's kind of obvious that should have always worked that way, but it didn't.
The average programmer will barely notice the changes between editions; they won't have to re-learn anything. The major changes to the language usually come in regular feature releases; those are optional additions, just like in go.
While the edition mechanism could be used to make changes as big as Python 3, so far it hasn't and I don't think it ever will. Decent chance there will never be a "rust 2" either.
> In Rust community fashion, I am not sure you have improved on communicating the difference
Sigh... you aren't wrong:
----------
The difference is more attitude than anything else.
Go strives to never have breaking changes, even if they are forced to bend that rule sometimes. They do have a mechanism that allows breaking changes, but it seems to be a reasonably recent innovation (before go 1.21, the toolchain wouldn't even error when encountering a future go version directive, and they didn't use it for a breaking change until 1.22).
Rust accepts that it needs to do breaking changes sometimes, and has developed mechanisms to explicitly allow it, and make it as smooth as possible. And this mechanism works very well.
----------
BTW, I'm not even saying go's stance is wrong. Rust needs this mechanism because it's a very ambitious language. It never would have reached a stable 1.0 unless it recognised the need for breaking changes.
Go is a much simpler language, with a different target market and probably should be aiming to minimise the need for breaking changes.
My original point is that "never make breaking changes" is the wrong lesson to take away from Python 3. And that Rust's editions provide a very good example of how to do breaking changes correctly.
> My original point is that "never make breaking changes" is the wrong lesson to take away from Python 3. And that Rust's editions provide a very good example of how to do breaking changes correctly.
Here we go again, but the point of editions, as far as I can tell, is so that there are no breaking changes. A value Go also holds. As a result, both projects are still at version 1 and will likely always forever be at version 1.
So, if we round back to the start of our discussion, if Python 2 had taken the same stance, there would never be a Python 3. What we know of as Python 3 today would just be another Python 2 point release. Which is what the earlier commenter that started all this was saying – Go will not move to Go 2, and Rust won't move to Rust 2, because nobody wants to make the same mistake Python did.
I understand you have an advertising quota to fill, but introducing Rust into the discussion was conversationally pointless.
I'd say Rust editions are more like going from Python 2.x to Python 2.y, than the 2->3 migration. The Rust standard library is still the same (and the string type is still "valid UTF-8") no matter the edition (this is why you can mix-and-match editions), the edition differences are mostly on the syntax.
> Just imagine how much smoother the python 3 transition would have been if you could transition projects incrementally, module by module as needed.
That would require manually converting strings to the correct type at each module boundary (you can't do it automatically, because on the Python 2.x side, you don't know whether or not a string has already been decoded/encoded into a specific character encoding; that is, you don't know whether a Python 2.x string should be represented by a "bytes" or a "str" on the Python 3.x side). That's made even harder by Python's dynamic typing (you can't statically look at the code and point all the places which might need manual review).
> First things first, let’s establish some ground. Go is a good language for writing servers, but Go is not a language for writing servers. Go is a general purpose programming language, just like C, C++, Java or Python
Really? Even years later in 2025, this never ended up being true. Unless your definition of 'general purpose' specifically excludes anything UI-related, like on desktop, web or mobile, or AI-related.
I know it was written in 2017, but reading it now in 2025 and seeing the author compare it to Python of all languages in the context of its supposed 'general purpose'-ness is just laughable. Even Flutter doesn't support Go; granted, that seems like a very deliberate decision to justify Dart's existence.
> "Go is a programming language designed by Google to help solve Google's problems [...] More than most general-purpose programming languages, Go was designed to address a set of software engineering issues that we had been exposed to in the construction of large server software."
In an alternative timeline, had Rust 1.0 been available when Docker pivoted away from Python into Go, and Kubernetes from Java into Go (both rewrites driven by Go folks pushing for them), they would most likely have been taken by RIIR instead - the same wave that is nowadays spreading across the Python and JavaScript ecosystems, including rewrites of tools originally written in Go.
> Unless your definition of 'general purpose' specifically excludes anything UI-related, like on desktop, web or mobile, or AI-related.
By that definition no language is general purpose. There is no language today that excels at GUI (desktop/mobile), web development, AI, cloud infrastructure, and all the other stuff like systems and embedded - and all at the same time.
For instance I have never seen or heard of a successful Python desktop app (or mobile for that matter).
I think the whole argument here is silly, but I do know kitty (the terminal) and Calibre (the ebook manager) are two rather popular cross-platform Python desktop apps.
I find "CancellationToken" in VSCode extension APIs quite clear and usable, and not overly complicated. Wonder if anyone has done a conparison of Go's context and CancellationToken.
Yeah, .NET developers have been passing CancellationTokens around in the places where they have needed them for 15 years. The tokens are basically invisible until their existence emerges when someone decides they want to cancel a long-running API call or something. At that point, they are plumbed as deeply as seems fit for the problem at hand and then hardly thought about ever again. CancellationTokens are generally a delightful pattern, especially when the language allows sensible defaults.
Context is useful in many cases. In Go I have to pass ctx from func to func. In Node.js I can easily create and use a context via AsyncLocalStorage (a benefit of being single-threaded).
> If you use ctx.Value in my (non-existent) company, you’re fired
I was unsuccessful in conveying the same message at my previous company (apart from the being-fired part). All around the codebase you'd see functions with official arguments and unofficial ones via ctx, which would panic everything if you forgot one was used 3 layers down (not kidding). The only context-value use case I've seen so far that is not terrible is having a layer of opentelemetry on top, as it makes things transparent and as a caller you don't have to give a damn how the telemetry is operated under the hood.
A new solution should be: simple and elegant; optional, non-intrusive and non-infectious; robust and efficient; and it should only solve the cancelation problem.
okay... so they dodged the thing I thought was going to be interesting: how would you solve passing state? e.g. if I write a middleware for net/http, I have to duplicate the entire http.Request and add my value to it.
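(For what it's worth, the duplication is a shallow copy done by the standard library. A sketch of that middleware pattern, with a hypothetical key type and value:)

    package middleware

    import (
        "context"
        "net/http"
    )

    type userKey struct{}

    // WithUser stores a value in the request context for handlers
    // further down the chain.
    func WithUser(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ctx := context.WithValue(r.Context(), userKey{}, "alice")
            // r.WithContext returns a shallow copy of the request
            // carrying the new context.
            next.ServeHTTP(w, r.WithContext(ctx))
        })
    }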
I agree so strongly with this piece. Go’s context lib is essential, confusing, functional, and should be handled at the language level, but like this author I also have no ideas for what the design should be.
This is about an explicit argument of type "Context". I'm not a Go user, and at first I thought it was about something else: an implicit context variable that allows you to pass stuff deep down the call stack, without intermediate functions knowing about it.
React has "Context", SwiftUI has "@Environment", Emacs LISP has dynamic scope (so I heard). C# has AsyncLocal, Node.JS AsyncLocalStorage.
This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?) but is actually very useful and can result in cleaner code with less globals or less superfluous function arguments. Imagine passing a logger like this, or feature flags. Or imagine setting "debug = True" before a function, and it applies to everything down the call stack (but not in other threads/async contexts).
Implicit context (properly integrated into the type system) is something I would consider in any new language. And it might also be a solution here (although I would say such a "clever" and unusual feature would be against the goals of Go).
We added exactly this feature to Arc* and it has proven quite useful. Long writeup in this thread:
https://news.ycombinator.com/item?id=11240681 (March 2016)
* the Lisp that HN is written in
Passing the current user ID/tenant ID inside ctx has been super useful for us. We’re already using contexts for cancellation and graceful termination, so our application-layer functions already have them. Makes sense to just reuse them to store user and tenant IDs too (which we pull from access tokens in the transport layer).
We have DB sharding, so the DB layer needs to figure out which shard to choose. It does that by grabbing the user/tenant ID from the context and picking the right shard. Without contexts, this would be way harder—unless we wanted to break architecture rules, like exposing domain logic to DB details, and it would generally just clutter the code (passing tenant ID and shard IDs everywhere). Instead, we just use the "current request context" from the standard lib that can be passed around freely between modules, with various bits extracted from it as needed.
What are the alternatives, though? Syntax sugar for retrieving variables from some sort of goroutine-local storage? Not good, we want things to be explicit. Force everyone to roll their own context-like interfaces, since a standard lib's implementation can't generalize well for all situations? That’s exactly why contexts were introduced—because nobody wanted to deal with mismatched custom implementations from different libs. Split it into separate "data context" and "cancellation context"? Okay, now we’re passing around two variables instead of one in every function call. DI to the rescue? You can hide userID/tenantID with clever dependency injection, and that's what we did before we introduced contexts to our codebase, but it resulted in allocating individual dependency trees for each request (i.e. we embedded userID/tenantID inside request-specific service instances, to hide the current userID/tenantID and other request details from the domain layer to simplify domain logic), and it stressed the GC.
An alternative is to add all dependencies explicitly to function argument lists or object fields, instead of taking them implicitly from the context without documentation or static typing. Including logger.
I already talked about it above.
Main problems with passing dependencies in function argument lists:
1) it pollutes the code and makes refactoring harder (a small change in one place must be propagated to all call sites in the dependency tree which recursively accept user ID/tenant ID and similar info)
2) it violates various architectural principles, for example, from the point of view of our business logic, there's no such thing as "tenant ID", it's an implementation detail to more efficiently store data, and if we just rely on function argument lists, then we'd have to litter actual business logic with various infrastructure-specific references to tenant IDs and the like so that the underlying DB layer could figure out what to do.
Sure, it can be solved with constructor-based dependency injection (i.e. request-specific service instances are generated for each request, and we store user ID/tenant ID & friends as object fields of such request-scoped instances), and that's what we had before switching to contexts, but it resulted in excessive allocations and unnecessary memory pressure for our high-load services. In complex enterprise code, those dependency trees can be quite large -- and we ended up allocating huge dependency trees for each request. With contexts, we now have a single application-scoped service dependency tree, and request-specific stuff just comes inside contexts.
Both problems can be solved by trying to group and reuse data cleverly, and eventually you'll get back to square one with an implementation which looks similar to ctx.Context but which is not reusable/composable.
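To make the sharding arrangement described above concrete, a hypothetical sketch (all names invented): only the transport layer writes the tenant ID, only the DB layer reads it, and everything in between just forwards ctx.

    package db

    import (
        "context"
        "database/sql"
    )

    type tenantKey struct{}

    var shards []*sql.DB // one pool per shard, opened at startup

    // WithTenant is called once, in the transport layer, after the
    // tenant ID has been pulled from the access token.
    func WithTenant(ctx context.Context, id uint64) context.Context {
        return context.WithValue(ctx, tenantKey{}, id)
    }

    // ShardFor is called only inside the DB layer; domain code in
    // between never mentions tenants or shards.
    func ShardFor(ctx context.Context) *sql.DB {
        id, _ := ctx.Value(tenantKey{}).(uint64)
        return shards[id%uint64(len(shards))]
    }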
> Including logger.
We don't store loggers in ctx, they aren't request-specific, so we just use constructor-based DI.
I believe this problem isn't solvable under our current paradigm of programming, which I call "working directly on plaintext, single-source-of-truth codebase".
Tenant ID, cancellations, loggers, error handling are all examples of cross-cutting concerns. Depending on what any given function does, and what you (the programmer) are interested in at a given moment, any of them could be critical information or pure noise. Ideally, you should not be seeing the things you don't care about, but our current paradigm forces us to spell out all of them, at all times, hurting readability and increasing complexity.
On the readability/"clean code", our most advanced languages are operating on a Pareto frontier. We have whole math fields being employed in service of packaging up common cross-cutting concerns, as to minimize the noise they generate. This is where all the magic monads come from, this is why you have to pay attention to infectious colors of your functions, etc. Different languages make slightly different trade-offs here, to make some concerns more readable, but since it's a Pareto frontier, it always makes some other aspects of code less comprehensible.
In my not so humble opinion, we won't progress beyond this point until we give up on the paradigm itself. We need to accept that, at any given moment, a programmer may need a different perspective on the code, and we need to build tools to allow writing code from those perspectives. What we now call source code should be relegated to the role of intermediary/object code - a single source of truth for the bowels of the compiler, but otherwise something we never touch directly.
Ultimately, the problem of "context" is a problem of perspective, and should be solved by tooling. That is, when reading or modifying code, I should be able to ignore any and all context I don't care about. One moment, I might care about the happy path, so I should be able to view and edit code with all error propagation removed; at another moment, I might care about how all the data travels through the module, in which case I want to see the same code with every single goddamn thing spelled out explicitly, in the fashion GP is arguing to be the default. Etc.
Plaintext is fine. Single source of truth is fine. A single all-encompassing view of everything in a source file is fine. But they're not fine all together, all the time.
Monads, but more importantly monad transformers, so you can program in a legible fashion.
However, there's a lot of manual labour to stuff everything into a monad, and then extract it and pattern match when your libraries don't match your choice of control flow monad(s)!
This is where I'd prefer if compilers could come in.
Imagine being in the bowels of a DB lib and realising that the function you just wrote might be well positioned to terminate the TCP connection it's using to talk to the database. Oh no: now you have to update the signature and every single call-site for its parent, and its parent, and...
Instead, it would be neat if the compiler could treat things you deem cross-cutting as a graph traversal problem instead; call a cancelable method and all callers are automatically cancelable. Decisions about whether to spawn a cancelable subtree, to 'protect' some execution, or to set a deadline are then written on an opt-in basis per function; all functions compose. The compiler can visualise the tree of cancellation (or hierarchical loggers, or OT spans, or actors, or green fibers, or ...) and it can enforce the global invariant that the entry-point captures SIGINT (or sets up logging, or sets up a tracer, or ...).
So imagine the infrastructure of a monad transformer, but available per-function on an opt-in basis. If you write your function to have a cleanup on cancellation, or write logs around any asynchronous barrier, the fiddly details of stuffing the monad is done by the compiler and optionally visualised and explained in the IDE. Your code doesn't have to opt-in, so you can make each function very clean.
Yes, there's plenty of space for automation and advanced support from tooling. Hell, not every perspective is best viewed as plaintext; in particular, anything that looks like a directed graph fundamentally cannot be well-represented in plaintext at all without repeating nodes, breaking the 1:1 correspondence between a token and a thing represented by that token.
Still, I believe the core insight here is that we need different perspectives at different times. Using your example, most of the time I probably don't care whether the code is cancellable or not. Any mention of it is distracting noise to me. But other times - perhaps next day, or perhaps just five minutes later, I suddenly need to know whether the code is cancellable, and perhaps I need to explicitly opt out of it somewhere. It's highly likely that in those cases, I may not care about things like error handling logic and passing around session identifiers, and I would like that to disappear in those moments, etc.
And hell, I might need an overview of which code is or isn't protected, and that would be best served by showing me an interactive DAG of functions that I can zoom around and expand/collapse, so that's another kind of perspective. Etc.
EDIT:
And then there's my favorite example: the unending holy war of "few fat functions" vs. "lots of tiny functions". Despite the endless streams of Tweets and articles arguing for either, there is no right choice here - there's no right trade-off you can make here up front, and can never be, because which one is more readable depends strictly on why you're reading it. E.g. lots of tiny functions reduce duplication and can introduce a language you can use to effectively think about some code at a higher level - but if there's a thorny bug in there I'm trying to fix, I want all of that shit inlined into one, big function, that I can step through sequentially, following the actual execution order.
It is my firm belief that the ability to inline and uninline code on the fly, for yourself, personally, without affecting the actual execution or the work of other developers, is one of the most important missing pieces in our current tooling, and making it happen is a good first step towards abandoning The Current Paradigm that is now suffocating us all.
Second one would be, along with inlining, the ability to just give variables and parameters fixed values when reading, and have those values be displayed and propagated through the code - effectively doing a partial simulation of code execution. Being able to do it ad hoc, temporarily, would be a huge aid in quickly understanding what some code does.
Promises are so incredibly close to being a representation of work.
The OS has such sophisticated tools for process management, but inside a process there are so many subprocesses going on, & it feels like we are flailing about with poorly managed process-like things. (Everyone except Erlang.)
I love how close zx comes to touching the sky here. It's a TypeScript library for running processes, as a tagged template function returning a promise: const hello = $`sleep 3; echo hello world`. But the promise isn't just "a future value", it is a ProcessPromise for interacting with the running process.
I so wish promises were just a little better. It feels like such a bizarre tragedy to me that "a promise is a future value", not a thing unto itself, won the day in ES6/ES2015 and destroyed the possibility of a promise being more; zx has run into a significant number of ergonomic annoyances because of this small-world dogma.
How cool it would be to see this go further. I'd love for the language to show what promises, if any, this promise is awaiting! I long for that dependency graph of subprocesses to start to show itself, not just at compile time but with the runtime able to actively observe and manage the subprocesses within it. We keep building workflow engines, building robust userlands that manage their own subprocesses, userlands upon userlands, but the language itself seems so close & yet so far from letting the simple promise become more of a process, and that seems like a sad shame.
> it violates various architectural principles, for example, from the point of view of our business logic, there's no such thing as "tenant ID"
I'm not sure I understand how hiding this changes anything. Could you just not pass "tenant ID" to doBusinessLogic function and pass it to saveToDatabase function?
That's exactly what they're talking about: "tenantId" shouldn't be in the function signature of functions that aren't concerned with the tenant ID, such as business logic.
But they chose a solution (if I understand correctly), where tenant ID is not in the signature of functions that use it, either.
I have a feeling, if Context disappears, you'll just see "Context" becoming a common struct that is passed around. In Python, unlike in C# and Java, the first param for a Class Method is usually the class instance itself, it is usually called "self" so I could see this becoming the norm in Go.
Under the hood, in both Java and C# the first argument of an instance method is the instance reference itself. After all, instance methods imply you have an instance to work with. Having to write 'this' by hand for that is how OOP was done before OOP languages became a thing.
I agree that adopting yet another pattern like this would be on brand for Go since it prizes taking its opinionated way of going about everything in a vintage kind of way over being practical and convenient.
As a newcomer to Go, a lot of their design decisions made a lot of sense when I realized that a lot of the design is based around this idea of "make it impossible to do something that could be dumb in some contexts".
For example, I hate that there's no inheritance. I wish I could create a ContainerImage object and then a RemoteContainerImage subclass and then QuayContainerImage and DockerhubContainerImage subclasses from those. However, being able to do inheritance, and especially multiple inheritance, can lead to awful, idiotic code that is needlessly complicated for no good reason.
At a previous job we had a script that would do operations on a local filesystem and then FTP items to a remote. I thought okay, the fundamental paradigms of FTP and SFTP-over-SSH via the paramiko module are basically identical so it should be a five minute job to patch it in, right?
Turns out this Python script, which, fundamentally, consisted of "take these files here and put them over there" was the most overdesigned piece of garbage I've ever seen. Clean, effective, and entirely functional code, but almost impossible to reason about. The code that did the actual work was six classes and multiple subclasses deep, but assumptions were baked in at every level. FTP-specific functionality which called a bunch of generic functionality which then called a bunch of FTP-specific functionality. In order to add SFTP support I would have had to effectively rewrite 80% of the code because even the generic stuff inherited from the FTP-specific stuff.
Eventually I gave up entirely and just left it alone; it was too important a part of a critical workflow to risk breaking and I never had the time or energy to put my frustration aside. Golang, for all its flaws, would have prevented a lot of that because a lot of the self-gratification this programmer spent his time on just wouldn't have been possible in Go for exactly this reason.
> As a newcomer to Go, a lot of their design decisions made a lot of sense when I realized that a lot of the design is based around this idea of "make it impossible to do something that could be dumb in some contexts".
You are indeed a newcomer :) God bless you; may you shoot your feet only in dev environments.
It sounds like you may have some friction-studded history with Go. Any chance you can share your experience and perspective with using the language in your workloads?
> instead of using them implicitly from the context, without documentation and static typing
This is exactly what context is trying to avoid, and makes a tradeoff to that end. There's often intermediate business logic that shouldn't need to know anything about logging or metrics collection or the authn session. So we stuff things into an opaque object, whether it's a map, a dict, a magic DI container, "thread local storage", or whatever. It's a technique as old as programming.
There's nothing preventing you from providing well-typed and documented accessors for the things you put into a context. The context docs themselves recommend it and provide examples.
If you disagree that this is even a tradeoff worth making, then there's not really a discussion to be had about how to make it.
I disagree that it's a good approach. I think that parameters must be passed down always, as parameters. It allows compiler to detect unused parameters and it removes all implicitness.
It is verbose indeed, and maybe there should be programming-language support to reduce that verbosity. Some languages support implicit parameters, which proved to be problematic, but maybe there should be more iteration in that direction.
I consider context for passing down values to do more harm than good.
It's nothing to do with verbosity, which is why I didn't mention it.
You can't add arguments to vendor library functions. It's super convenient to have contexted logging work for any logging calls.
Other responses cover this well, but: the idea of having to change 20 functions to accept and propagate a `user` field just so that my database layer can shard based on userid is gross/awful.
...but doing the same with a context object is also gross/awful.
> an implicit context variable that allows you to pass stuff deep down the call stack, without intermediate functions knowing about it. [...] but is actually very useful and can result in cleaner code with less globals or less superfluous function arguments. [...] and it applies to everything down the call stack (but not in other threads/async contexts).
In my experience, these "thread-local" implicit contexts are a pain, for several reasons. First of all, they make refactoring harder: things like moving part of the computation to a thread pool, making part of the computation lazy, calling something which ends up modifying the implicit context behind your back without you knowing, etc. All of that means you have to manually save and restore the implicit context (inheritance doesn't help when the thread doing the work is not under your control). And for that, you have to know which implicit contexts exist (and how to save and restore them), which leads to my second point: they make the code harder to understand and debug. You have to know and understand each and every implicit context which might affect code you're calling (or code called by code you're calling, and so on). As proponents of another programming language would say, explicit is better than implicit.
They're basically dynamic scoping, and it's both a very useful, powerful feature and a very dangerous one... Scheme's dynamic-wind model makes it more obvious when that particular form of magic is in use, but isn't otherwise a lot different.
I would like to think that somebody better at type systems than me could provide a way to encode it into one that doesn't require typing out the dynamic names and types on every single function but can instead infer them based on what other functions are being called therein, but even assuming you had that I'm not sure how much of the (very real) issues you describe it would ameliorate.
I think for golang the answer is probably "no, that sort of powerful but dangerous feature is not what we're going for here" ... and yet when used sufficiently sparingly in other languages, I've found it incredibly helpful.
Trade-offs all the way down as ever.
Basically you'd be asking for inferring a record type largely transparently. That's going to quickly explode to the most naive form because it's very hard to tell what could be called, especially in Go.
I don't think you could fit it to Go, no.
But see https://hackage.haskell.org/package/effectful for work in the general area that seems rather promising.
I haven't seen it mentioned yet, but Odin also has an implicit `context` variable:
https://odin-lang.org/docs/overview/#implicit-context-system
> React has "Context", SwiftUI has "@Environment", Emacs LISP has dynamic scope (so I heard). C# has AsyncLocal, Node.JS AsyncLocalStorage.
Emacs Lisp retains dynamic scope, but it hasn't been the default for some time now, in line with other Lisps that remain in use. Dynamic scope is one of the greatest features in the Lisp language family, and it's sad to see it missing almost everywhere else - where, as you noted, it's being reinvented, but poorly, because it's not a first-class language feature.
On that note, the most common case of dynamic scope that almost everyone is familiar with, are environment variables. That's what they're for. Since most devs these days are not familiar with the idea of dynamic scope, this leads to a lot of peculiar practices and footguns the industry has around environment variables, that all stem from misunderstanding what they are for.
> This is one of those ideas that at first seem really wrong (isn't it just a global variable in disguise?)
It's not. It's about scoping a value to the call stack. Correctly used, rebinding a value to a dynamic variable should only be visible to the block doing the rebinding, and everything below it on the call stack at runtime.
> Implicit context (properly integrated into the type system) is something I would consider in any new language.
That's the problem I believe is currently unsolved, and possibly unsolvable in the overall programming paradigm we work under. One of the main practical benefits of dynamic scope is that place X can set up some value for place Z down on the call stack, while keeping everything in between X and Z oblivious of this fact. Now, this is trivial in dynamically typed language, but it goes against the principles behind statically-typed languages, which all hate implicit things.
(FWIW, I love types, but I also hate having to be explicit about irrelevant things. Since whether something is relevant or not isn't just a property of code, but also a property of a specific programmer at specific time and place, we're in a bit of a pickle. A shorter name for "stuff that's relevant or not depending on what you're doing at the moment" is cross-cutting concerns, and we still suck at managing them.)
> Emacs Lisp retains dynamic scope, but it hasn't been the default for some time now
https://www.gnu.org/software/emacs/manual/html_node/elisp/Va...
> By default, the local bindings that Emacs creates are dynamic bindings. Such a binding has dynamic scope, meaning that any part of the program can potentially access the variable binding. It also has dynamic extent, meaning that the binding lasts only while the binding construct (such as the body of a let form) is being executed.
It’s also not really germane to the GP’s comment, as they’re just talking about dynamic scoping being available, which it will almost certainly always be (because it’s useful).
Sorry, you're right. It's not a cultural default anymore. I.e. Emacs Lisp got proper lexical scope some time ago, and since then, you're supposed to start every new .elisp file with:

    ;; -*- lexical-binding: t -*-

i.e. explicitly switching the interpreter/compiler to work in lexical binding mode.

> against the principles behind statically-typed languages, which all hate implicit things
But many statically typed languages allow throwing exceptions of any type. Contexts can be similar: "try catch" becomes "with value", "throw" becomes "get".
Yes, but then those languages usually implement only unchecked exceptions, as propagating error types up the call tree is seen as annoying. And then, because there are good reasons to want typed error values (instead of just "any"), there is now pressure to use result types (aka "expected", "maybe") instead - turning your return type Foo into Result<Foo, ErrorType>.
And all that does is make you spell out the entire exception-handling mechanism explicitly in your code - not just propagating the types up the call tree, but also making every function explicitly wrap, unwrap and branch on Result types. The latter is so annoying that people invent new syntax to hide it - like tacking ? onto the end of a call, or whatever.
This becomes even worse than checked exceptions, but it's apparently what you're supposed to be doing these days, so ¯\_(ツ)_/¯.
We could make explicit effect (context, error) declarations for public functions and inferred for private functions. Explicit enumeration of possible exceptions is required for stable APIs anyway.
raku's take on gradual typing may be to your taste; i likewise prefer to leave irrelevant types out and use maximally-expressive types where it makes sense¹. i feel this is helped by the insistence on sigils because you then know the rough shape of things (and thus a minimal interface they implement: $scalar, @positional, %associative, &callable) even when you lack their specific types. in the same vein, dynamically scoped variables are indicated with the asterisk as a twigil (second level sigil).
@foo is a list (well, it does Positional anyway), while @*foo is a different variable that is additionally dynamically scoped.

it's idiomatic to see $*DB as a database handle to save passing it around explicitly, env vars live in %*ENV, things like that. it's nice to have the additional explicit reminder whenever you're dealing with a dynamic variable in a way the language checks for you and yells at you for forgetting.

i would prefer to kick more of the complex things i do with types back to compile time, but a lot of static checks are there. more to the point, raku's type system is quite expressive at runtime (that's what you get when you copy common lisp's homework, after all) and helpful to move orthogonal concerns out into discrete manageable things that feel like types to use even if what they're doing is just a runtime branch that lives in the function signature. doing stuff via subset types or roles or coercion types means whatever you do plays nicely with polymorphic dispatch, method resolution order, pattern matching, what have you.
in fact, i just wrote a little entirely type level... thing? to clean up the body of an http handler that lifts everything into a role mix-in pipeline that runs from the database straight on through to live reloading of client-side elements. processing sensor readings for textual display, generating html, customizing where and when the client fetches the next live update, it's all just the same pipeline applying roles to the raw values from the db with the same infix operator (which just wraps a builtin non-associative operator to be left associative to free myself from all the parentheses).
not getting bogged down in managing types all the time frees you up to do things like this when it's most impactful, or at least that's what i tell myself whenever i step on a rake i should have remembered was there.
¹ or times where raku bubbles types up to the end-user, like the autogenerated help messages generated from the type signature of MAIN. i often write "useless" type declarations such as subset Domain-or-IP; which match anything² so that the help message says --host[=Domain-or-IP] instead of --host[=Str] or whatever
² well, except junctions, which i consider the current implementation of to be somewhat of a misstep since they're not fundamentally also a list plus a context. it's a whole thing. in any case, this acts at the level of the type hierarchy that you want anyway.
Scala has implicit contextual parameters: https://docs.scala-lang.org/tour/implicit-parameters.html.
I've always been curious about how this feature plays out in day-to-day operations and long-term projects. Are you happy with it?
As a veteran of a large scala project (which was re-written in go, so I'm not unbiased), no. I was generally not happy.
This was scala 2, so implicit resolution lookup was a big chunk of the problem. There's nothing at the call site that tells you what is happening. But even when it wasn't hidden in a companion object somewhere, it was still difficult because every import change had to be scrutinized as it could cause large changes in behavior (this caused a non-zero number of production issues).
They work well for anything you would use environment variables for, but a chunk of the ecosystem likes to use them for handlers (the signature being a Functor generally), which was painful
> There's nothing at the call site that tells you what is happening.
A decent IDE highlights it at the call site.
It's definitely an abusable feature, but I find it very useful. In most other languages you end up having to have completely invisible parameters (e.g. database session bound to the thread) because it would be too cumbersome to pass them explicitly. In Scala you have a middle ground option between completely explicit and completely invisible.
I'm not sure what you consider a decent scala ide, but it was a problem with IntelliJ in several of our code bases, and I'd have to crawl the implicit resolution path.
I eventually opted to desugar the Scala completely, but we were already on the way out of Scala by that point.
> it was a problem with IntelliJ in several of our code bases
It shouldn't be, unless you were using macros (always officially experimental) or something - I was always primarily an Eclipse guy but IntelliJ worked well. Did you not get the green underline?
yeah that's what i thought, but maybe scala implicit param not being perfect will help finding a better linguistic trait (maybe they should enforce purity on these parameters)
IMO it is perfect, or at least better than anything else that's been found so far.
"Purity" means different things in different contexts. Ultimately you can give programmers more tools, but you can't get away from relying on their good judgement.
thanks a lot for your answer
Not OP, but I was briefly seconded to a team that used Scala at a big tech co, and I was often frustrated by this feature specifically. They had a lot of code consuming implicit parameters that I was trying to call from contexts where they were not available.
Then again I guess it's better than a production outage because the thread-local you didn't know was a requirement wasn't available.
Scala has everything, and therefore nothing.
> Implicit context (properly integrated into the type system) is something I would consider in any new language.
Those who forget monads are doomed to reinvent dozens of limited single-purpose variants of them as language features.
Algebraic effects and implicit arguments with explicit records are perfectly cromulent language features. GHC Haskell already has implicit arguments, and IIRC Scala uses them instead of a typeclass/trait system. The situation with extensible records in Haskell is more troublesome, but it’s more because of the endless bikeshedding of precisely how powerful they should be and because you can get almost all the way there with the existing type-system features except the ergonomics invariably suck.
It’s reasonable, I think, to want the dynamic scope but not the control-flow capabilities of monads, and in a language with mutability that might even be a better choice. (Then again, maybe not—SwiftUI is founded on Swift’s result builders, and those seem pretty much like monads by another name to me.) And I don’t think anybody likes writing the boilerplate you need to layer a dozen MonadReaders or -States on each other and then compose meaningful MonadMyLibraries out of them.
Finally, there’s the question of strong typing. You do want the whole thing to be strongly typed, but you don’t want the caller to write the entire dependency tree of the callee, or perhaps even to know it. Yet the caller may want to declare a type for itself. Allowing type signatures to be partly specified and partly inferred is not a common feature, and in general development seems to be backing away from large-scale type inference of this sort due to issues with compile errors. Not breaking ABI when the dependencies change (perhaps through default values of some sort) is a more difficult problem still.
(Note the last part can be repeated word for word for checked exceptions/typed errors. Those are also, as far as I’m aware, largely unsolved—and no, Rust doesn’t do much here except make the problem more apparent.)
Thread-local storage would mean all async tasks (goroutines) have to stay on the same thread. That isn't how tasks are actually scheduled. A request can fan out, or contention can move parts of the computation between threads, which is why context exists.
Furthermore in Go threads are spun up at process start, not at request time, so thread-local has a leak risk or cleanup cost. Contexts are all releasable after their processing ends.
I've grown to be a huge fan of Go for servers and context is one reason. That said, I agree with a lot of the critique and would love to see an in-language solution, but thread-local ain't it.
A more correct term is "goroutine-local" storage, which Go _already_ has. It's used for pprof labels; they are even inherited when a new goroutine is started.
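For reference, that mechanism is the label API in runtime/pprof; labels set via pprof.Do ride along with the goroutine and are inherited by goroutines it starts. A sketch (label name and value invented):

    package main

    import (
        "context"
        "runtime/pprof"
    )

    func work(ctx context.Context) { /* ... */ }

    func main() {
        ctx := context.Background()
        pprof.Do(ctx, pprof.Labels("tenant", "acme"), func(ctx context.Context) {
            // This code, and any goroutine it starts, shows up in
            // profiles with the tenant=acme label attached.
            go work(ctx)
        })
    }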
And Java added ScopedValue in version 20 as a preview feature.
In Jetpack compose, the Composer is embedded by the compiler at build time into function calls
https://medium.com/androiddevelopers/under-the-hood-of-jetpa...
I’m still not sure how I feel about it. While more annoying, I think I’d like to see it, rather than just having magic under the hood.
seeing it is great. coming into a hairy monolith and having to plumb one variable through half a dozen layers to get to the creamy nougat layer you actually wanted it in is not. having to do that more than once is why they invented the "magic" implicit context variable.
A good pitch for dynamic (context) variables is that they're not globals, they're like implicit arguments passed to all functions within the scope.
Personally I've used the (ugly) Python contextvars for:
- SQS message ID, to allow extending message visibility at any place in the code
- scoped logging context in logstruct (structlog killer in development :D)
I no longer remember what I used Clojure dynvars for, probably something dumb.
That being said, I don't believe that "active" objects like DB connection/session/transaction are good candidates for a context var value. Programmers need to learn to push side effects up the stack instead. Flask-SQLAlchemy is not correct here.
Even Flask's request object being context-scoped is a bad thing since it is usually not a problem to do all the dispatching in the view.
Yeah, I agree 100% with you. The thing with Golang is that it's supposed to be a very explicit language, so passing the context as an argument fits in with the rest of the language.
Nevertheless: just having it, be it implicit or explicit, beats having to implement it yourself.
> If you use ctx.Value in my (non-existent) company, you’re fired
This is such a bad take.
ctx.Value is incredibly useful for passing around the context of API calls. We use it a lot, especially for logging context values such as locales, ids, client info, etc. We then use these context values as headers when calling other services, so they gain the context around the original call too. Loggers in all services pluck values out of the context automatically when a log entry is created. It's a fantastic system and serves us well, e.g.:
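(A minimal sketch of the shape of this; all names are hypothetical.)

    package logctx

    import (
        "context"
        "log/slog"
    )

    type requestMeta struct {
        Locale   string
        ClientID string
    }

    type metaKey struct{}

    // WithMeta stashes request metadata in the context, typically
    // in the transport layer or an HTTP middleware.
    func WithMeta(ctx context.Context, m requestMeta) context.Context {
        return context.WithValue(ctx, metaKey{}, m)
    }

    // Logger plucks the metadata back out, so every log entry is
    // automatically annotated with it.
    func Logger(ctx context.Context, base *slog.Logger) *slog.Logger {
        if m, ok := ctx.Value(metaKey{}).(requestMeta); ok {
            return base.With("locale", m.Locale, "client_id", m.ClientID)
        }
        return base
    }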
Let me try to take the other side:
`ctx.Value` is an `any -> any` kv store that comes with no documentation or type checking for which keys and values should be available. It's quick and dirty, but in a large code base it can be quite tricky to check whether you are passing too many values down the chain, or too few, and to handle the failure cases.
What if you just use a custom struct with all the fields you may need to be defined inside? Then at least all the field types are properly defined and documented. You can also use multiple custom "context" structs in different call paths, or even compose them if there are overlapping fields.
Because you should wrap that in a type-safe function. You should not use ctx.Value() directly but use your own function; the context is just a transport mechanism.
If it is just a transport mechanism, why use context at all and not a typed struct?
Because dozens of in between layers don't need to know the type, and should in fact work regardless of the specific type.
Context tells you enough: someone, somewhere may do magic with this if you pass it down the chain.
And in good Go tradition it's explicit about this: functions that don't take a context don't (generally) do that kind of magic.
If anything it mixes two concerns: cancelation and dynamic scoping.
But I'm not sure having two different parameters would be better.
> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available
The docs https://pkg.go.dev/context#Context suggest a way to make it type-safe (use an unexported key type and provide getter/setter). Seems fine to me.
> What if you just use a custom struct with all the fields you may need to be defined inside?
Can't seamlessly cross module boundaries.
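The pattern the docs recommend looks roughly like this (a sketch; package and names hypothetical). The key type is unexported, so other packages can only touch the value through the typed helpers:

    package session

    import "context"

    // ctxKey is unexported: no other package can construct it, so
    // key collisions and untyped access are impossible.
    type ctxKey struct{}

    // WithLocale returns a copy of ctx carrying the locale.
    func WithLocale(ctx context.Context, locale string) context.Context {
        return context.WithValue(ctx, ctxKey{}, locale)
    }

    // Locale reports the locale stored in ctx, if any.
    func Locale(ctx context.Context) (string, bool) {
        l, ok := ctx.Value(ctxKey{}).(string)
        return l, ok
    }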
> `ctx.Value` is an `any -> any` kv store that does not come with any documentation, type checking for which key and value should be available.
On a similar note, this is also why I highly dislike struct tags. They're string magic that should be used sparingly, yet we've integrated them into data parsing, validation, type definitions and who knows what else just to avoid a bit of verbosity.
Most popular languages support annotations of one type or another, and they let you do all that in a type-safe way. It's Go that decided to be different for difference's sake, and it produced a complete mess.
IMO Go is full of stuff like this where they do something different than most similar languages for questionable gains. `iota` instead of enums, implicit interfaces, full strings in imports (not talking about URLS here but them having string literal syntax), capitalization as visibility control come to mind immediately, and I'm sure there are others I'm forgetting. Not all of these are actively harmful, but for a language that touts "simplicity" as one of its core values, I've always found it odd how many different wheels Go felt the need to reinvent without any obvious benefit over the existing ones.
The second I tried writing Go to solve a non-trivial problem, the whole language collapsed in on itself: footguns upon footguns hand-waved away with "it's the Go way!". I just don't understand. "The Go way" feels more like a mantra that discourages critical thinking about programming language design.
> `ctx.Value` is an `any -> any`
It did not have to be this way; this is a shortcoming of Go itself. Generic interfaces make things a bit better, but the Go designers chose that dumb typing in the first place. The std lib itself is full of interface{} use.
Context itself is an afterthought: people were building thread-unsafe, leaky code on top of HTTP requests, with no good way to scope variables so that they would scale concurrently. I remember the web session lib back then, for instance - a hack.
ctx.Value is made for per-goroutine scoped data; that's the whole point.
If it is an antipattern well, it is an antipattern designed by go designers themselves.
Maybe he doesn't have a company because he is too dogmatic about things that don't really matter.
100%
People who have takes like this have likely never zoomed out enough to understand how their software delivery ultimately affects the business. And if you haven't stopped to think about that you might have a bad time when it's your business.
Someone has to question the status quo. If we just did the same things there would be a lot less progress. The author took the time to articulate their argument, and publish it. I appreciate their effort even if I may not agree with their argument.
Bingo. Everything that can be wrongly used or abused started out its existence within sane constraints and use patterns.
The author gave pretty good reasoning for why it is a bad idea in the same section. However, for demonstration purposes I think they should have included their vision of how request-scoped data should be passed.
As I understand they propose to pass the data explicitly, like a struct with fields for all possible request-scoped data.
I personally don't like context for value passing either, as it is easy to abuse in a way that makes it part of the API: the callee expects something from the caller, but there is no static check that makes sure it happens. Something like passing an argument in a dictionary instead of using parameters.
However, for "optional" data whose presence is not required for the behavior of the call, it should be fine. That sort of discipline has to be enforced on the human level, unfortunately.
> As I understand they propose to pass the data explicitly, like a struct with fields for all possible request-scoped data.
So basically context.Context, except it can't propagate through third party libraries?
If you use a type like `map[string]any` then yes, it's going to be the same as Context. However, you can make a struct with fields of exactly the types you want.
It won't propagate to the third-party libraries, yes. But then again, why don't they just provide an explicit way of passing values instead of hiding them in the context?
> why don't they just provide an explicit way of passing values instead of hiding them in the context?
Hiding them in a context is the explicit way of passing values through oblivious third-party libraries.
In some future version of Go, it would be nice to just have dynamic scoping. But this works now, and it’s a good pattern. The only real issue is the function-colouring one, and that’s solvable by simply requiring that every exported function take a context.
Precisely because you need to be able to pass it through third party libraries and into callbacks on the other side where you need to recover the values.
Yeah most people talking here are unlikely to have worked on large scale Go apps.
Managing a god-level context struct with every field that could ever be relevant, and documenting what each of them means in a position-independent way, just doesn't scale.
Import cycles mean you’re forced into this if you want to share between all your packages, and it gets really hairy.
We effectively use this approach in most of our Go services. Besides logging, we sometimes use it to pass stuff that is not critical but highly useful to have, like request and response bodies from HTTP calls, tenant information, and similar info.
This article is from 2017!
As others have already mentioned, there won't be a Go 2. Besides, I really don't want another verbose method for cancellation; error handling is already bad enough.
I thought Go 2 was considered harmful.
Yes, that's why you should instead use "COMEFROM", or its more general form, "LET'S HAVE A WALK".
Oh, don't even start about Go's knack for being pithy to a fault.
I came here to say this.
Contexts in Go are generally used for convenience in request cancellation, but they're not required, and they're not the only way to do it. Under the hood, a context is just a channel that's closed on cancellation. The way it was done before contexts was pretty much the same:
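Something like this sketch - the caller closes a done channel and workers select on it, which is the same mechanism context uses internally:

```go
package main

// worker stops when the caller closes done - the pre-context idiom.
func worker(done <-chan struct{}, jobs <-chan int) {
	for {
		select {
		case <-done: // closed by the caller to cancel
			return
		case j := <-jobs:
			_ = j // do the actual work here
		}
	}
}

func main() {
	done := make(chan struct{})
	jobs := make(chan int)
	go worker(done, jobs)
	jobs <- 1
	close(done) // cancel the worker
}
```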
Some compare the context "virus" to the async virus in languages that bolt an async runtime on top of sync syntax - but the main difference is that you can compose context-aware code with context-oblivious code (by passing context.Background()), and vice versa, with no problems. Contexts definitely have flaws - verbosity being the one I hate the most - but having them behave as ordinary values, just like errors, makes context-aware code more understandable and flexible.

E.g. here's a context-aware wrapper for the standard `io.Reader` that is completely compatible with `io.Reader`. (For an io.ReadCloser, we could additionally call its `Close()` method when the context is done - or even better, use `context.AfterFunc(ctx, rc.Close)`.)
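A sketch of such a wrapper (a reconstruction, goroutine-per-Read - the replies below criticize precisely this shape):

```go
package ctxio

import (
	"context"
	"io"
)

type readResult struct {
	n   int
	err error
}

// ioContextReader wraps an io.Reader so that Read returns early when
// the stored context is cancelled.
type ioContextReader struct {
	ctx context.Context
	r   io.Reader
}

func (cr ioContextReader) Read(p []byte) (int, error) {
	ch := make(chan readResult, 1) // buffered: the goroutine never blocks on send
	go func() {
		n, err := cr.r.Read(p)
		ch <- readResult{n, err}
	}()
	select {
	case res := <-ch:
		return res.n, res.err
	case <-cr.ctx.Done():
		// Caveat (raised below): the goroutine may still be blocked in
		// Read and may write into p after we return.
		return 0, cr.ctx.Err()
	}
}
```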
And just like errors, having cancellation done automatically makes code more prone to errors: when you don't put in "on-cancel" cleanup code, your code gets cancelled but doesn't clean up after itself. When you don't select on `ctx.Done()`, your code doesn't get cancelled at all, making the bug more obvious.
You are half right. A context also carries a deadline. This is important for those APIs which don't allow asynchronous cancellation but which do support timeouts as long as they are set up in advance. Indeed, your ContextReader is not safe to use in general, as io.ReadCloser does not specify the effect of concurrent calls to Close during Read. Not all implementations allow it, and even when they do tolerate it, they don't always guarantee that it interrupts Read.
This works, but goes against convention in that (from the context package docs) you shouldn’t “store Contexts inside a struct type; instead, pass a Context explicitly to each function that needs it.”
It does seem an unnecessarily limiting convention.
What will go wrong if one stores a Context in a struct?
I've done so for a specific use case, and did not notice any issues.
> What will go wrong if one stores a Context in a struct?
Contexts are about the dynamic contour, i.e. the dynamic call stack. Storing the current context in a struct and then referring to it in some other dynamic … context … is going to lead to all sorts of pain: timeouts or deadlines which have already expired and/or values which are no longer pertinent.
While there are some limited circumstances in which it may be appropriate, in general it is a very strong code smell. Any code which passes a context should receive a context. And any code which may pass a context in the future should receive one now, to preserve API compatibility. So any exported function really should have a context as its first argument for forwards-compatibility.
This guidance is actually super important, as contexts are expected to be modified in a code flow and apply to all functions that are downstream of your current call stack.
If you store contexts on your structs it’s very likely you won’t thread them correctly, leading to errors like database code not properly handling transactions.
Actually super fragile and you should avoid doing this as much as is possible. It’s never a good idea!
True. But this code is only proof-of-concept of how non-context-aware functions can be wrapped in a context. Such usage of context is not standard.
Consider this:
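Presumably something along these lines (a reconstruction): the called function takes a context but reads from a plain io.Reader, so the context never takes effect.

```go
package ctxio

import (
	"context"
	"io"
)

// consume receives a context, but nothing connects it to r.
func consume(ctx context.Context, r io.Reader) error {
	buf := make([]byte, 1024)
	_, err := r.Read(buf) // ctx's timeout and values are never consulted
	return err
}
```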
Of course not - you're not handling the context at all in the called function. What's there to consider? reader.Read() has no idea about your timeout and value-store intent. How would it - telepathy?
There are two solutions, depending on your real use case:
1) You're calling Read() directly and don't need to use functions that strictly accept io.Reader - then just implement ReadContext:
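A sketch of what that might look like (hypothetical names, the same channel dance as in the wrapper above):

```go
package ctxio

import (
	"context"
	"io"
)

type readResult struct {
	n   int
	err error
}

type contextualReader struct{ r io.Reader }

// ReadContext takes the context per call instead of storing it in the struct.
func (cr contextualReader) ReadContext(ctx context.Context, p []byte) (int, error) {
	ch := make(chan readResult, 1)
	go func() {
		n, err := cr.r.Read(p)
		ch <- readResult{n, err}
	}()
	select {
	case res := <-ch:
		return res.n, res.err
	case <-ctx.Done():
		return 0, ctx.Err()
	}
}
```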
2) Otherwise, just wrap the ioContextReader with another ioContextReader:
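A sketch of that option, building on the ioContextReader above:

```go
// Re-wrap whatever reader you were handed with the context you care about.
func consumeWrapped(ctx context.Context, r io.Reader) error {
	cr := ioContextReader{ctx: ctx, r: r} // r may itself be an ioContextReader
	buf := make([]byte, 1024)
	_, err := cr.Read(buf) // now ctx cancellation applies to this call
	return err
}
```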
Changing the interface (option 1) is obviously not relevant. And re-wrapping works only for the toy example. In the real world, the reader isn't some local variable; there can be many, across different structs, behind private fields.
To circle back, and not to focus too much on the io.Reader example: the virality of ctx is real, and making wrapper structs is not a good solution. Updating stale references may not be possible, and it would quickly become overwhelming. Not to forget the performance overhead.
Personally I think it's okay; Go is fine as a "webservices" language. The Go gospel is "you can have your cake and eat it too", but it's almost never true unless you twist the meanings of "cake" and "eat".
You're spawning a goroutine per Read call? This is pretty bonkers inefficient, to start, and a super weird approach in any case...
Yes, but this is just a proof of concept. For any given case, you can optimize the approach to your needs. E.g. a single-goroutine ReadCloser:
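A reconstruction of what that might have looked like - one long-lived pump goroutine instead of one per Read, with a mutex serializing calls (the mutex is what the reply below objects to):

```go
package ctxio

import (
	"context"
	"io"
	"sync"
)

type result struct {
	n   int
	err error
}

type ctxReadCloser struct {
	ctx   context.Context
	rc    io.ReadCloser
	mu    sync.Mutex // serializes Read and Close
	reqs  chan []byte
	resps chan result
	once  sync.Once
}

func newCtxReadCloser(ctx context.Context, rc io.ReadCloser) *ctxReadCloser {
	c := &ctxReadCloser{
		ctx:   ctx,
		rc:    rc,
		reqs:  make(chan []byte),
		resps: make(chan result),
	}
	go func() { // the single long-lived goroutine
		for p := range c.reqs {
			n, err := rc.Read(p)
			c.resps <- result{n, err}
		}
	}()
	return c
}

func (c *ctxReadCloser) Read(p []byte) (int, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	select {
	case c.reqs <- p:
	case <-c.ctx.Done():
		return 0, c.ctx.Err()
	}
	select {
	case res := <-c.resps:
		return res.n, res.err
	case <-c.ctx.Done():
		// Still imperfect: an abandoned read leaves the pump goroutine
		// blocked on resps until the context's owner stops using c.
		return 0, c.ctx.Err()
	}
}

func (c *ctxReadCloser) Close() error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.once.Do(func() { close(c.reqs) })
	return c.rc.Close()
}
```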
Again, this is not to say this is the right way, only that it is possible and does not require the shenanigans that e.g. Python needs when mixing sync & async, or even different async libraries.

A mutex in a hot Read (or any IO) path isn't efficient.
What would you suggest as an alternative?
[flagged]
Thank you for your helpful input.
No worries, this help also generalises to any time you want to add locking/synchronisation or other overheads to a hot code path.
It’s best not to acquire a mutex and launch a goroutine to read 3 bytes of data at a time.
Also, hot tip: you can like… benchmark this. It’s not illegal.
> It’s best not to acquire a mutex and launch a goroutine to read 3 bytes of data at a time.
io.Copy uses 32KB buffers. Other parts of standard library do too. If you're using Read() to read 3 bytes of data at a time, mutex is the least of your worries.
Since you seem to be ignoring the sarcasm of my previous comment: just saying "don't do that" without suggesting an alternative in the particular code context you're referring to isn't useful at all. It's just annoying.
It may well use 32KB buffers, or any size, but that doesn’t translate to “reading 32KB at a time”.
If you’re aborting specifically an io.Copy, then there are better ways to do that: abort in the write path rather than the read path.
It’s not my job to provide you with alternative code. That’s your job.
> It’s not my job to provide you with alternative code. That’s your job.
It is not your job to tell me "that is wrong", yet you do it because it's easy. Suggesting an alternative (not necessarily providing the code) is less easy, so you don't wanna do it. That's fine. I just want you to be aware that the former without the latter is pretty much useless.
It is also easy to tell you that you’re wrong if you were to post that you use your forehead to hammer nails into a post.
Posting detailed instructions on how to identify, purchase and utilise a hammer isn’t something I need to do, and doesn’t negate the correctness of the initial “don’t do that”.
You've been missing the point of bheadmaster's posts, which (as it seems to me) was to show that "you can compose context-aware code with context-oblivious code (by passing context.Background()), and vice versa with no problems". Bheadmaster gave some proof of concept code showing how to do that. The code might be somewhat inefficient, but that doesn't invalidate the point. If you think there's a more efficient way to compose context-aware code with context-oblivious code, then the best way to make that case would be to explain how to do so.
> This probably doesn’t happen often, but it’s prone to name collisions.
It's funny, it really was just using strings as keys until quite recently, and obviously there were collisions and there was no way to "protect" a key/value, etc.
Now the convention is to use a key with a private type, so no more collisions. The value you get back is still untyped and needs to be cast, though. Also, many older libraries still use strings.
The blog post from 2014 introducing context uses a private key type, so there's really no excuse: https://go.dev/blog/context#package-userip
> It’s very similar to thread-local storage. We know how bad of an idea thread-local storage is. Non-flexible, complicates usage, composition, testing.
I kind of do wish we had goroutine local storage though :) Passing down the context of the request everywhere is ugly.
I like explicit over implicit. I will take passing down context (in the sense of the concept, not the specific Go implementation) explicitly everywhere over implicit ("put it somewhere and I'll trust I can [probably, hopefully] get it back later") any day of the week.
I've seen plenty of issues in Java codebases where there was an assumption some item was in the Thread Local storage (e.g. to add some context to a log statement or metric) and it just wasn't there (mostly because code switched to a different thread, sometimes due to a "refactor" where stuff was renamed in one place but not in another).
Most recently I've been bitten by this with Datadog. The Python version does some monkeypatching to inject trace info; with the Go version you need to inject the trace info explicitly. While the latter takes more setup, it was much easier to understand what was going on and to debug when we ran into issues.
Sounds very familiar. I was a Java developer for a long time, and in that ecosystem adding a library to your project can be enough for code to be activated and run. There are plenty of libraries where the idea is: just include it, magic stuff will happen, and everything works! That is, until it doesn't work. And then you have to try and debug all this magic stuff of how Java automatically loads classes, how these classes are created and run, and what they do. Didn't happen very often, but when it happened usually a full week was wasted with this.
I really prefer spending a bit more time to set it up myself (and learn something about what I'm using in the process) and knowing how it works, than all the implicit magic.
This is why I avoid Python. I started doing Go after looking at a few solutions written in Python that I couldn't use.

Magic values inside objects of recursive depth, changing dynamically at runtime - after working for some time with functional languages and languages with immutable structures, I'm afraid of such features today.
Context is nice because it's explicit. Even the function signature spills the detail: `GetXFromName(context.Context, string)` already says that this call will do some IO or a remote call, and might never return or be subject to cancellation.
Now your stuff breaks when you pass messages between channels.
Goroutines have a tiny stack at the beginning, 4KB iirc. Having a goroutine-local storage will probably open a can of worms there.
Contexts spread just like exceptions do: the moment you introduce one, it flies up and down all the functions to get where it needs to be. I can't help but think that goroutine-local storage and operations, like Java's thread-locals, would be a cleaner solution to the problem.
Contexts implement the idea of cancellation along with goroutine-local storage, and at that they work very well.
What if, for the hypothetical Go 2, we added an implicit context for each goroutine? You'd probably need to call a builtin, say `getctx()`, to get it.

The context would be inherited by all goroutines automatically. If you wanted to change the context, you'd use another builtin, say `setctx()`.

This would have the usefulness of the current context without having to pass it down the call chain everywhere.

The cognitive load is two builtins, getctx() and setctx(). It would probably be quite easy to implement too - just stuff a context.Context in the G.
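Illustratively (hypothetical syntax - neither builtin exists in Go today):

```go
// Hypothetical Go 2 - not valid Go today.
func handler(w http.ResponseWriter, r *http.Request) {
	setctx(r.Context()) // stash the context in the current goroutine's G
	doWork()            // no ctx parameter anywhere below this point
}

func doWork() {
	ctx := getctx() // implicitly inherited, including across `go` statements
	_ = ctx
}
```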
Was this solved? Is this context only a cancellation flag or does it do something more? The obvious solution for a cancellation trigger would be to have cancellation as an optional second argument. That's how it's solved in e.g. C#. Failing to pass the argument just makes it CancellationToken.None, which is simply never cancelled. So I/O without cancellation is simply foo.ReadAsync(x) and with cancellation it's foo.ReadAsync(x, ct).
It's not just for cancellation and timeouts, it is also used for passing down metadata, but also for cross-cutting concerns like structured loggers.
Consider what happens in JavaScript when you declare a function as async. Now everything calling it is infected. Passing around runtime constructs like context in Go (AbortSignal in JS) or an allocator in Zig gives exactly the right level control back to the call and I love it. You can bail out of context propagation at any level of your program if that's your desire.
Needs a (2017)!
Yes, I was about to comment that “there won’t be a Go 2”, but I guess that wasn’t settled when the article was written.
as someone who's not in the community: why not?
The major features that may have required a 2.0 were implemented in a backwards-compatible way, removing the utility of a Go 2.0.
Go 2.0 was basically a blank check for the future that said "We may need to break backwards compatibility in a big way". It turns out the Go team does not see the need to cash that check and there is no anticipated upcoming feature in the next several years that would require it.
The last one that I was sort of wondering about was the standard library, but the introduction of math/rand/v2 has made it clear the devs are comfortable ramping standard library packages without a Go 2. There are a number of standard libraries that I think could stand to take a v2; there aren't any that are so broken that it's worth a v2 to hard-remove them. (Except arguably syscall [1], which it turns out doesn't belong in the standard library because it can't maintain the standard library's backwards-compatibility promise and should have been in the extended standard library from the beginning; but that's been the way it is for a long time now and it also doesn't rate a v2.)
(And again let me underline I'm not saying all the standard library is perfect. There is some brokenness here and there, for various definitions of "brokenness". I'm just saying it's not so broken that it's worth a v2 hard break at the language level and hard elimination of the libraries such that old code is forcibly broken and forced to update to continue on.)
[1]: https://pkg.go.dev/syscall
To not repeat other's (Python) mistakes ;-)
The introduction of Python 3 wasn't a mistake. The mistake was discontinuing Python 2.
Just look at how rust does it. Rust 1.0 code still works in the latest version of rustc, you just need to set the project to the Rust 2015 edition. You can even mix-and-match editions, as each crate can have a different edition. Newer versions of rustc will always support all previous editions, and breaking changes are only ever introduced when a new edition is released every 3 years. If the crate is stable, no real reason to upgrade, it will work forever. And if you do need to update a project, you can split it into multiple crates and do it incrementally.
Just imagine how much smoother the python 3 transition would have been if you could transition projects incrementally, module by module as needed.
It seems you are both saying the same thing. Had Python not drawn a line in the sand and instead continued to support Python 2 amid future updates, there would have been no reason for Python 3. The Python 2 line could have kept improving instead.
Just as you say, Python could have introduced what is found in Python 3 without breaking Python 2 support. Which is the direction Go has settled on; hence why Go 2 is off the table. Go 1.0 and Go 1.23 are very different languages, but backwards version support is retained, so no need for a new major version.
No. The point of rust editions is that they do break support for older code, which is very different to what go has now settled on.
IMO, it's the best of both worlds. Old code continues to work forever, but your language design isn't held back by older design mistakes.
The trouble with the Rust community is that it is terrible at communication. That may be why you presuppose that everyone understands the meaningful difference between Rust editions and Go version directives, but I can't tell a difference beyond the frivolous, like the syntax used. Based on the documentation of each, they seem like the exact same concept, with the exact same goals in mind. As a result, unfortunately, your point is not yet made. Perhaps you can break the cycle and describe, for everyday people, how Rust editions are fundamentally different?
Editions allow making breaking changes to Rust without splitting the ecosystem - no hassle caused to existing code unless it opts into the new edition and its breaking changes. There's currently editions 2015, 2018, 2021, and 2024. When a new edition is introduced, it can make breaking changes such as introducing new keywords, but every previous edition remains supported forever by newer compiler versions.
The key part is that editions are configured per-library - libraries A and B might use editions 2015 and 2021, and your application could use edition 2018 and depend on those libraries, and it works.
If you wrote a library with the original 2015 edition, and never upgraded it to deal with new `async` and `await` keywords added in the 2018 edition, that's totally fine. Newer compilers will continue to compile it in its configured edition=2015 mode, where the new keywords don't exist (so your local variable named `async` still compiles), and new code written against newer editions may still use this 2015 edition library with no issue.
Editions are different from Go version directives: a version directive says "my library needs features added in this Go version", but it doesn't enable Go to make breaking changes to the language.
Editions can't do every kind of breaking change however - they mostly work for syntax level changes, and don't work for things like tearing out regrettable parts of the standard library.
> The key part is that editions are configured per-library - libraries A and B might use editions 2015 and 2021
In what way is that key? It still reads as being the same as the Go version directive. Obviously there are some differences in the implementation. For example, Go puts it in go.mod, while Rust puts it in Cargo.toml, but at a conceptual level I fail to see any fundamental difference. As you describe it, and how the documentation describes it, they attempt to accomplish the same thing for the same reason.
But, as phire puts it, they are "very different". I don't see how. The tradition of the Rust community being horrible at communication carries on, I'm afraid. As before, you are going to have to speak to those who aren't deep in the depths of programming languages. Dumb it down for the reader who uses PHP and has never touched Go or Rust in their life.
> they don't enable Go to make breaking changes to the language.
What, exactly, do you mean? The change to loop variable semantics comes to mind that was clearly a breaking change to the language, but gracefully handled with the version directive. What purpose are you under the impression the directive serves if not for dealing with breaking changes?
The documentation is probably the best resource to start with the concept and how it works / what the goals are: https://doc.rust-lang.org/edition-guide/editions/index.html
For example, https://doc.rust-lang.org/edition-guide/rust-2021/warnings-p... - code that produced a lint warning in the 2018 edition produces a compiler error in the 2021 edition. That is something that couldn't be done in a backwards-compatible way without editions.
Another example would be changes to the import syntax https://doc.rust-lang.org/edition-guide/rust-2018/path-chang... - the compiler will forever support the 2015 behavior in crates that use the 2015 edition, but crates using newer editions can use the newer behavior
As stated before, the documentation for both languages was already consulted. It did not clear up how Rust is any different from Go in this regard. Consider a simple example from the Go documentation: support for underscores in numeric literals, which was not part of the original language and was later included in a 'new edition' of Go:
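The example in question was presumably along these lines - underscores in numeric literals, a feature added in Go 1.13:

```go
package main

import "fmt"

func main() {
	const million = 1_000_000 // compiles with `go 1.13` or later in go.mod;
	fmt.Println(million)      // a `go 1.12` directive makes this an error
}
```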
Using the 1.13 or later version of the gc compiler, if your go.mod specifies anything after 1.12, the above compiles fine. But if go.mod asserts go 1.12 or earlier, you will get a compiler error from the above code, as the compiler reverts to the earlier behaviour based on the version directive. That sounds exactly like what you described! And, like I said before, Rust's documentation too reads to me like editions accomplish basically the same thing and exist for the same reason Go version directives exist.

But the earlier commenter indicated that they are very different. So, unfortunately, you have again failed to break the cycle. We need something dumbed down for us regular people, not something directed at those who walk, talk, and sleep Rust.
Sorry to have disappointed you. I don't walk, talk, or sleep either Rust or Go, but was trying to provide some resources to help in case you hadn't seen them yet.
One difference I noticed in the docs is that the Go Reference says the "go" line of a go.mod has to be greater than or equal to the go line of all that module's dependencies (if the go line is 1.21 or higher), so a module for 1.21 can't depend on a module for 1.22 [1].
That restriction doesn't apply for Rust, a library using the 2015 edition can use a dependency that uses the 2018 edition, for example.
That's just one difference I noticed in the implementation. The goals seem very similar if not the same
[1] https://go.dev/doc/modules/gomod-ref#go
Thanks for trying. But it is the "which is very different to what go has now settled on" that we are trying to get to the bottom of. It appears from your angle that you also conclude that Go has settled on the very same thing, frivolous implementation details aside. Hopefully phire will still return to dumb it down for us.
Ok... the only thing that go version directives do is selectively enable new features. Essentially, they are only really there to help you ensure your code will continue compiling in older versions of the compiler. Hell, until recently it wouldn't even throw an error if you tried to compile code with a future version directive.
The actual backwards compatibility in go is achieved by never removing functionality or syntax. New versions can only ever add new features/syntax. If there was a broken function in the API, that function needs to stick around forever, and they will be forced to add a second version of that function that now does the correct thing.
So, you can take code written for go 1.0, slap on a "go 1.23" directive and it will compile just fine. That's the guarantee that go provides. Well, mostly. There are a few examples of go 1.0 code that doesn't compile anymore, even when you use a "go 1.0" directive.
But not being able to remove anything ever is limiting.
A good example of how this can be limiting is reserved keywords. Go has a fixed set of reserved keywords that they picked for 1.0, and they can never reserve any more, any code using them as identifiers will break. Any new feature needs to be carefully designed to never need a new reserved keyword. Either they reuse an existing keyword (which c++ does all the time), or they use symbols instead.
But rust can reserve new keywords. The 2018 edition of rust reserved "async" and "await" for future async functionality and "try" for a potential try block. Rust did reserve "yield" from the start for generators, but decided they needed a way to mark a function as a generator, so in the 2024 edition, "gen" is now a reserved keyword, breaking any code that uses gen as a function/variable name.
Do note that rust also follows the go strategy within an edition. There is only one new edition every three years, and it would be a pain if all new features had to wait for a new edition. So the async/await keywords were reserved in the 2018 edition, but didn't actually get used until the end of 2019.
This means that just because your Rust version supports the 2018 edition doesn't mean it will compile all 2018 code. The editions are just for breaking changes, and there is a separate minimum "rust-version" field that's somewhat equivalent to Go's "go 1.x" directive. Though "rust-version" doesn't disable features; it's just there to provide a nice clean warning to users on old compilers. Ideally, in the future it will gain the ability to selectively disable language features (Rust already has extensive support for selectively enabling experimental language features in nightly builds, which we haven't even talked about here).
Basically, Rust editions allow all breaking changes to be bundled up and applied once every three years. They also provide a way for compilers to continue supporting all previous editions, so old code will continue to work by picking an old edition. While Go's version directive looks superficially similar to editions, it is there for a different reason and doesn't actually allow for breaking changes.
> The actual backwards compatibility in go is achieved by never removing functionality or syntax.
Go 1.22, for example, removed functionality related to loop variables, so that is not strictly true. You might be right that the project doesn't take change lightly. There has to be a very compelling reason to justify such a change, and why would it be any other way? If something isn't great but still gets the job done, there is no reason to burden developers with having to learn a new language.
Go is not exactly the most in-depth language ever conceived. There is not much functionality or syntax that could be removed without leaving it inoperable. But there is no technical reason why it couldn't. The mechanics to allow it are already there, and it doesn't even violate the Go 1 guarantee to do so under the operation of those mechanics.
So, sure, it is fair to say that there is a social reason for Go making as few breaking/incompatible changes as possible, but we were talking about the technology for allowing breaking/incompatible changes to co-exist, and how the same concept could have been applied to Python. In Rust community fashion, I am not sure you have improved on the horrible communication. We recognized that there is some difference in implementation details right from the onset, but the overall concept still seems to me to be the same in both cases.
Again, we need it dumbed down for the everyday average person. Your audience doesn't eat monads for breakfast like your expression seems to believe.
> Go 1.22, for example, removed functionality related to loop variables, so that is not strictly true.
Ah, interesting. I searched but couldn't find any example of Go actually making a breaking change. Rust has a massive document [1] listing every single breaking change in a single place. With Go you kind of have to dig through the release notes of each version.
So maybe the Go team is relaxing its stance on backwards compatibility slightly, now that it has a mechanism that does kind of work. Which is good; I encourage that. But the official stance is still that most code from go 1.0 should work without issues.
> there is no reason to burden developers with having to learn a new language.
To be clear, many of the breaking changes in Rust editions are the same kind of thing as that go loop example. Edge cases where it's kind of obvious that should have always worked that way, but it didn't.
The average programmer will barely notice the changes between editions; they won't have to re-learn anything. The major changes to the language usually come in regular feature releases; they are optional additions, just like in Go.
While the edition mechanism could be used to make changes as big as Python 3, so far it hasn't and I don't think it ever will. Decent chance there will never be a "rust 2" either.
> In Rust community fashion, I am not sure you have improved on communicating the difference
Sigh... you aren't wrong:
----------
The difference is more attitude than anything else.
Go strives to never have breaking changes, even if they are forced to bend that rule sometimes. They do have a mechanism that allows breaking changes, but seems to be a reasonably recent innovation (before go 1.21, it wouldn't even error when encountering a future go version directive, and they didn't use it for a breaking change until 1.22)
Rust accepts that it needs to do breaking changes sometimes, and has developed mechanisms to explicitly allow it, and make it as smooth as possible. And this mechanism works very well.
----------
BTW, I'm not even saying go's stance is wrong. Rust needs this mechanism because it's a very ambitious language. It never would have reached a stable 1.0 unless it recognised the need for breaking changes.
Go is a much simpler language, with a different target market and probably should be aiming to minimise the need for breaking changes.
My original point is that "never make breaking changes" is the wrong lesson to take away from Python 3. And that Rust's editions provide a very good example of how to do breaking changes correctly.
[1] https://doc.rust-lang.org/edition-guide/editions/
> My original point is that "never make breaking changes" is the wrong lesson to take away from Python 3. And that Rust's editions provide a very good example of how to do breaking changes correctly.
Here we go again, but the point of editions, as far as I can tell, is so that there are no breaking changes. A value Go also holds. As a result, both projects are still at version 1 and will likely always forever be at version 1.
So, if we round back to the start of our discussion, if Python 2 had taken the same stance, there would never be a Python 3. What we know of as Python 3 today would just be another Python 2 point release. Which is what the earlier commenter that started all this was saying – Go will not move to Go 2, and Rust won't move to Rust 2, because nobody wants to make the same mistake Python did.
I understand you have an advertising quota to fill, but introducing Rust into the discussion was conversationally pointless.
I'd say Rust editions are more like going from Python 2.x to Python 2.y, than the 2->3 migration. The Rust standard library is still the same (and the string type is still "valid UTF-8") no matter the edition (this is why you can mix-and-match editions), the edition differences are mostly on the syntax.
> Just imagine how much smoother the python 3 transition would have been if you could transition projects incrementally, module by module as needed.
That would require manually converting strings to the correct type at each module boundary (you can't do it automatically, because on the Python 2.x side, you don't know whether or not a string has already been decoded/encoded into a specific character encoding; that is, you don't know whether a Python 2.x string should be represented by a "bytes" or a "str" on the Python 3.x side). That's made even harder by Python's dynamic typing (you can't statically look at the code and point all the places which might need manual review).
> First things first, let’s establish some ground. Go is a good language for writing servers, but Go is not a language for writing servers. Go is a general purpose programming language, just like C, C++, Java or Python
Really? Even years later in 2025, this never ended up being true. Unless your definition of 'general purpose' specifically excludes anything UI-related, like on desktop, web or mobile, or AI-related.
I know it was written in 2017, but reading it now in 2025 and seeing the author compare it to Python of all languages in the context of its supposed 'general purpose'-ness is just laughable. Even Flutter doesn't support Go; granted, that seems like a very deliberate decision to justify Dart's existence.
It is not.
Link to previous discussion: https://news.ycombinator.com/item?id=14958989
> https://golang.org/doc/faq#What_is_the_purpose_of_the_projec...: "By its design, Go proposes an approach for the construction of system software on multicore machines."
> That page points to https://talks.golang.org/2012/splash.article for "A much more expansive answer to this question". That article states:
> "Go is a programming language designed by Google to help solve Google's problems [...] More than most general-purpose programming languages, Go was designed to address a set of software engineering issues that we had been exposed to in the construction of large server software."
In an alternative timeline, had Rust 1.0 been available when Docker pivoted away from Python into Go, and Kubernetes from Java into Go (rewrites pushed by the Go folks they had on board), both would most likely have been swept up by RIIR instead, which is nowadays spreading across the Python and JavaScript ecosystems, including rewrites of tools originally written in Go.
Nope. Rust is not a good tool for servers. It's downright terrible, in fact. Goroutines help _a_ _lot_ with concurrency.
Go tell that to Amazon, Facebook and Microsoft.
> Unless your definition of 'general purpose' specifically excludes anything UI-related, like on desktop, web or mobile, or AI-related.
By that definition no language is general purpose. There is no language today that excels at GUI (desktop/mobile), web development, AI, cloud infrastructure, and all the other stuff like systems and embedded - all at the same time.
For instance I have never seen or heard of a successful Python desktop app (or mobile for that matter).
I think the whole argument here is silly, but I do know kitty (terminal) and Calibre (ebook manager) are two rather popular cross-platform Python desktop apps.
> If the Go language ever comes to the point where I’d have to write this
> put a bullet in my head, please.

Manually passing around a context everywhere sounds about as palatable as manually checking every return for error.
Exactly - the snippet needs at least three lines of inane error-checking boilerplate and variable juggling.
Discussed at the time:
Context should go away for Go 2 - https://news.ycombinator.com/item?id=14951753 - Aug 2017 (40 comments)
I find "CancellationToken" in VSCode extension APIs quite clear and usable, and not overly complicated. Wonder if anyone has done a conparison of Go's context and CancellationToken.
Yeah, .NET developers have been passing CancellationTokens around in the places where they have needed them for 15 years. The tokens are basically invisible until their existence emerges when someone decides they want to cancel a long-running API call or something. At that point, they are plumbed as deeply as seems fit for the problem at hand and then hardly thought about ever again. CancellationTokens are generally a delightful pattern, especially when the language allows sensible defaults.
> If you use ctx.Value in my (non-existent) company, you’re fired
What a nice attitude.
Context is useful in many cases. In Go I have to pass ctx from func to func. In Node.js I can easily create and use a context with AsyncLocalStorage (a benefit of being single-threaded).
> If you use ctx.Value in my (non-existent) company, you’re fired
I was unsuccessful in conveying the same message at my previous company (apart from the being-fired part). All around the codebase you'd see functions with official arguments and unofficial ones via ctx, which would panic everything if you forgot one was used 3 layers down (not kidding). The only use case for context values I've seen so far that is not terrible is a layer of opentelemetry, as it makes things transparent: as a caller you don't have to give a damn how the telemetry is operated under the hood.
The new solution should be: simple and elegant; optional, non-intrusive and non-infectious; robust and efficient; and it should only solve the cancelation problem.
Okay... so they dodged the thing I thought was going to be interesting: how would you solve passing state? E.g. if I write a middleware for net/http, I have to duplicate the entire http.Request and add my value to it.
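For reference, a sketch of the middleware shape being discussed (handler and key names are made up):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
)

type ctxKey struct{}

// withUser attaches a value to the request context. r.WithContext
// returns a shallow copy of the request - the "duplicate the entire
// http.Request" the comment refers to.
func withUser(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := context.WithValue(r.Context(), ctxKey{}, "user-42")
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func main() {
	http.Handle("/", withUser(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		user, _ := r.Context().Value(ctxKey{}).(string)
		fmt.Fprintln(w, "user:", user)
	})))
	http.ListenAndServe(":8080", nil)
}
```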
> If you use ctx.Value in my (non-existent) company, you’re fired
Yeah, okay. I tried to find reasons you'd want to use this feature and ultimately found that I really, really dislike it.
there's no Go 2
"Go 2 Considered Harmful"
I agree so strongly with this piece. Go’s context lib is essential, confusing, functional, and should be handled at the language level, but like this author I also have no ideas for what the design should be.
2017!!!