

It seems like the more interesting thing is bypassing CFI protections by abusing coroutines to jump around instead of inserting jumps to other functions.
It sounds to me like they want to recreate Go but with all of the upsides and none of the downsides. Pretty good goal. I think I’ll give it another look now that it’s been a while.
Thanks for the overview!
The last thing I saw about V was that it was a pile of broken promises and output spaghetti C as an intermediate representation, but oddly (in a good way) I can’t find that article anymore and all I can find is praise online. Maybe it’d be worth giving it another look now.
I’ve consulted in the software dev teams of dozens of major multinationals and the projects were always, without exception, some variant on “how can we replace people” or “how can we reduce costs by doing something slightly worse”.
Always might be an overstatement, but this has been true over the past couple of years for me and the people I know at these companies. Especially right now, upper management seems to be deluded into thinking that LLMs can do anything; more likely, they’re just trying to sell hype like everyone else to raise the stock price.
It’s extremely immature and only has a few examples. I can’t find a reference or any real form of documentation either, though I’m sure it exists somewhere.
If you’re looking for an “efficient” programming language (you’ll probably need to define that further but I’m assuming output size and compile speed), both Go (which seems to inspire this project) and Zig come to mind.
My job has AI usage as an objective as well. It’s ridiculous. If a tool will make my job easier, then I’ll be the one to tell you, and I’ll be the first person advocating for it. The people in charge aren’t doing my job, so they can fuck off with the micromanagement.
This blames the wrong application. It’s not reasonable to assume that every application handles Windows’ stupid line endings, and anyone who configures a VCS to automatically modify the contents of files it handles is a fool.
Many tools convert on checkout by default. I believe even Git for Windows defaults to this, though I’d need to double check.
The correct solution here is to use a .gitattributes file and renormalize the line endings. That being said, in 2025, Bash could offer a better error message when a shebang ends in a carriage return and the program can’t be found. I’ve run into that enough at work to know what that error is.
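A minimal sketch of that fix (the text=auto policy is one common choice; adjust the patterns to your repo):

```
# .gitattributes — normalize line endings to LF in the repository
* text=auto
# force LF for shell scripts so shebangs never pick up a CR
*.sh text eol=lf
```

After adding the file, git add --renormalize . rewrites already-checked-in files to match the new policy.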
I usually use whatever the formatter does by default. Most are reasonable by default, and I might prefer things a different way personally, but I find that my job is easier when everyone else’s code is checked by CI to conform to the formatter’s style rather than being an unreadable mess of random newlines and weird mismatched indentation.
The guideline that I follow is that if a tool doesn’t enforce a rule before the code can be merged, then it’s not a rule. Everyone, myself included (if I’m being particularly pressured on a feature), will overlook it at some point, whether intentional or not.
For early returns, I think most reasonably configurable formatters support optional braces in those cases. This of course is assuming that’s a thing in your language, since many don’t have a concept of one-line unbraced returns (Python doesn’t use braces, Rust always expects them, etc). For consistency and just to have a rule, I usually just brace them because I know everyone will if it’s enforced rather than it varying person-to-person.
My point is that the colors make so little sense that the only thing that makes sense is to decouple the colors from any meaning.
This table is so lacking and flawed that I don’t even know why there’s so much discussion around it.
As far as I can tell, the table is purely informational and not advocating for any features as being positive or negative.
Otherwise, yeah the colors make no sense.
For tool configs? I’m not really sure I follow. All my source code for the project goes in src/ or some other subdirectory, and the project root is a bunch of configs, a source directory, maybe some scripts, etc. It’s never really bothered me.
What has bothered me is __pycache__ directories. Whoever decided to litter those in every source directory all over the place… let’s just say I hope they learned to never do that again. I deal with enough trying to get Python to work at all (the absolute hell of getting imports working correctly; the random, though admittedly mostly documented, BS gotchas littered all over the standard library; packages with poor docs, no types, and every function worth calling taking **kwargs; etc.). Seeing my code littered with these directories isn’t something I really want to deal with as well.
A standard for build output might make sense to me. Maybe just throw cache stuff in .cache and build output in .build (with intermediate artifacts in there as well, potentially). For configs, I wouldn’t really complain about it all going in .config, but it also doesn’t matter much to me, and sometimes you end up having nested configs anyway in nested project dirs (thinking of eslint configs, gitignores, etc).
Depends on what you need to match. Regex is just another programming language. It’s more declarative than traditional languages though (it’s basically pattern matching).
Pattern matching is something I already do a lot of in my code, so regexes aren’t that much different.
Regardless, the syntax sucks. It takes some time to get familiar with it, but once you get past that, it’s really simple.
To me, it seems like this article is overemphasizing code duplication as problematic. If multiple types of searches use some of the same fields, it’s okay to just copy them to each search type that uses them. This also allows each search type to be independently updated later on to add additional fields or deprecate existing fields without affecting other search types.
Fields that should always exist together should probably be moved to a struct containing those fields, if there’s some concept that encapsulates them. Paging fields, for example, that exist only on two of three variants can just live in their own struct, and those two variants can have fields of that type.
Code duplication is only really problematic when all duplicates need to be updated together every time. That does not seem to be the case here.
The distribution is super important here too. Hashing every value to zero (h(x) = 0) is a valid hash function, but it has a terrible distribution. The challenge is hashing real-world values into a mostly uniform distribution to avoid collisions where possible.
Still, the contents of the article are useful even outside of hashing. It should just disclaim that the width of the output isn’t the only thing important in a hash function.
Had this happen before with pattern matching.
Because you created a first draft. Your first draft should include all that info. It isn’t writing the whole doc for you lol, just making minor edits to turn it from notes into prose.
Without that? No clue, good luck. They can usually read source files to put something together, but that’s unreliable.
This would infuriate me to no end. It’s literally the definition of a data race. All data between threads needs to either be accessed through synchronization primitives (mutexes, atomic access, etc) or needs to be immutable. For the most part, this should include fds, though concurrent writes to stderr might be less of an issue (still a good idea to lock/buffer it and stdout though to avoid garbled output).
The main value I found from Copilot in vscode back when it first released was its ability to recognize and continue patterns in code (like in assets, or where you might have a bunch of similar but slightly different fields in a type that are all initialized mostly the same).
I don’t use it anymore though because I found the suggestions to be annoying and distracting most of the time and got tired of hitting escape. It also got in the way of standard intellisense when all I needed was to fill in a method name. It took my focus away from thinking about the code, because it would generate plausible-looking lines of code and my thinking would get pulled in that direction as a result.
With “agents” (whatever that term means these days), the article describes my feelings exactly. I spend the same amount of time verifying a solution as I would just creating the solution myself. The difference is I fully understand my own code, but I can’t reach that same understanding of generated code as fast because I didn’t think about writing it or how that code will solve my problem.
Also, asking an LLM about the generated code is about as reliable as you’d expect on average, and I need it to be 100% reliable (or extremely close) if I’m going to use it to explain anything to me at all.
Where I found these “agents” to be the most useful is expanding on documentation (markdown files and such). Create a first draft and ask it to clean it up. It still takes effort to review that it didn’t start BSing something, but as long as what it generates is small and it’s just editing an existing file, it’s usually not too bad.
This depends. Many languages support one-liner aliases, whether that’s using/typedef in C++, type in Rust, Python, and TS, etc. In other languages, it may be more difficult and not worth it, though this particular example should just use a duration type instead.
The same holds true for C++20’s modules, which are really cool! Except you can’t really use them because compilers don’t fully support them yet.