i've been implementing my wiki system in haskell... types are /far/ too rigid in many circumstances.
to link code from two libraries together, i had to serialize html to string and reparse it with another library, all because their html asts - though structurally the same - used incompatible IO types.
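(for what it's worth: when both AST definitions are actually in scope, a direct structural conversion avoids the string round-trip entirely. a minimal sketch, using two hypothetical, structurally identical types standing in for the two libraries' ASTs - the real types are hidden behind library boundaries, which is what forces the render/reparse fallback:)

```haskell
-- Hypothetical stand-ins for two libraries' structurally
-- identical HTML ASTs. In the real situation the constructors
-- aren't exported, so this direct conversion isn't available.
data HtmlA = TextA String | ElemA String [HtmlA]
  deriving (Eq, Show)

data HtmlB = TextB String | ElemB String [HtmlB]
  deriving (Eq, Show)

-- A direct fold between the two types: no serialization involved.
convert :: HtmlA -> HtmlB
convert (TextA s)      = TextB s
convert (ElemA tag cs) = ElemB tag (map convert cs)

main :: IO ()
main = print (convert (ElemA "p" [TextA "hi"]))
```

when the constructors aren't exported, render-to-string and reparse really is about the only option short of patching one of the libraries.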
still going to stick with haskell, because it's the best way to work with pandoc... but it's difficult to trust for serious software development while several libraries work like this.
on the plus side, i can be relatively confident in the code if it typechecks... but this is no replacement for just running the code and correcting afterwards
haskell's laziness means it can't reap the optimization benefits of strict evaluation either - such a silly language!
while it's fun to prove things in types, this is not the way to go to just get things done.
@jakeisnt huh! what about the laziness stops it from being optimised? i would have thought that would make it *more* optimisable since a compiler gets more options on when it could run code.
@zens @jakeisnt yeah you can't do superoptimization on a strict language because a strict language definitionally has extra guarantees about execution time and ordering. i think jake is probably just not using the right abstraction for the job, which is admittedly the cause of a lot of inefficiency when working with more expressive tools
@zens @jakeisnt i'm not an expert on haskell but lazy languages have potential for optimization that is actually confusingly/surprisingly effective, such as evaluators based on interaction nets (Lamping's optimal reduction, etc.) and rewrite rules that give the appearance of negative complexity for some functions when fusion occurs
GHC is fairly conservative from a theoretical perspective, mostly because it doesn't really need to be faster and is quite complex
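(to make the fusion point concrete: GHC's rewrite rules in base - the foldr/build fusion machinery - can collapse a pipeline like the one below into a single traversal, so the intermediate list is never allocated. a minimal sketch:)

```haskell
-- With -O, GHC's foldr/build fusion rules rewrite this pipeline
-- so the list produced by map (+1) never materializes at runtime;
-- the two passes become one, as if we'd written map ((*2) . (+1)).
pipeline :: [Int] -> [Int]
pipeline = map (*2) . map (+1)

main :: IO ()
main = print (pipeline [1, 2, 3])  -- [4,6,8]
```

this is exactly the kind of transformation that laziness plus purity makes safe to apply mechanically.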
@zens @jakeisnt it leaves a lot on the table in a way that isn't mandated by the language at all, it's just that GHC already has a lot of involved optimization systems. those could probably be replaced in many ways with supercompilation by evaluation plus superoptimization at an evaluation level. I'm saying that just based on a vague understanding of Haskell's semantics and the received knowledge that GHC is complicated, but I presume such a more elegant compiler would basically be clean-sheet
@zens @jakeisnt i've seen @alcinnz comment on GHC's design at some length, and they could probably weigh in on this. However, GHC is honestly good enough in my experience. Some fiddling is occasionally needed to make sure optimizations don't get missed - there isn't really an overarching, formalized framework guaranteeing they fire reliably - but the generated code generally performs more than well enough for real-world applications.
My understanding is that the main bottleneck preventing GHC from doing more optimizations is that it's only compiling & optimizing a single file at a time BEFORE linking. It's pretty easy for GHC to determine where laziness isn't desired, and there are ways to turn off laziness if you need to.
haskell also has a guaranteed evaluation order, but enforcing it at the call site rather than when parameters are passed to closures means (to me, not an expert by any means) that the arguments have to be evaluated at every single call site of every function
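(on "ways to turn off laziness": seq, bang patterns, and the Strict extension all force evaluation where you want it. a small sketch with BangPatterns, showing a strict accumulator that avoids thunk buildup:)

```haskell
{-# LANGUAGE BangPatterns #-}

-- The bang forces acc at every step, so the fold runs in constant
-- space instead of building a chain of (+) thunks the way a plain
-- lazy foldl would.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs

main :: IO ()
main = print (sumStrict [1 .. 100])  -- 5050
```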
@jakeisnt @syntacticsugarglider hmmm i thought that the elimination of global scope/side effects would mean that it’s safe to do term rewriting in those instances, ahead of time if the compiler needs to. in languages like C guaranteed execution order has made it harder to optimise on newer cpu architectures that do out of order execution
@syntacticsugarglider @zens unfortunately, there isn't actually elimination of global side effects so this isn't true. consider the error function and how much use it still sees in several common libraries!
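(to illustrate why error complicates things: even "pure" code can abort, so an optimiser that reorders or discards expressions can change which error, if any, the program observes - GHC's imprecise-exceptions semantics exist precisely to claw back some freedom here. a small demonstration of a pure expression that isn't effect-free:)

```haskell
import Control.Exception (ErrorCall, evaluate, try)

-- head [] has type Int like any other expression, but evaluating
-- it calls error - an observable effect hiding inside "pure" code.
firstOrZero :: [Int] -> IO Int
firstOrZero xs = do
  r <- try (evaluate (head xs)) :: IO (Either ErrorCall Int)
  pure (either (const 0) id r)

main :: IO ()
main = do
  print =<< firstOrZero [7, 8]  -- 7
  print =<< firstOrZero []      -- 0
```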
i wouldn't be surprised if ghc is somehow able to perform a lot of additional optimization in code without things like error though
@jakeisnt @syntacticsugarglider not that i am an expert but the order of execution issues in c optimisation that i have seen mostly refer to things like the postfix increment operator - the subtlety of what value it should evaluate to, and when - or the vagaries of declarations and assignments: all the little things that must happen in the right order to produce a deterministic value.
@zens i don't know too much about ghc, but my intuition would be that the combination of laziness and heavy reliance on closures means that it's nearly impossible to inline - as none of the closures' arguments can be evaluated until reaching the final call site