i've been implementing my wiki system in haskell... types are /far/ too rigid in many circumstances.

to link code from two libraries together, i had to serialize html to a string and reparse it with the other library, all because their html asts - though structurally the same - used incompatible IO types.
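to make the shape of the problem concrete - here's a miniature of the situation, with hypothetical in-file types standing in for the two libraries' asts (none of these names are real apis). the direct structural conversion is the fix the type mismatch forced me to fake via serialize/reparse:

```haskell
module Main where

-- two structurally identical but nominally distinct html asts,
-- standing in for the two libraries' types (all names hypothetical)
data HtmlA = TextA String | NodeA String [HtmlA]
data HtmlB = TextB String | NodeB String [HtmlB]

renderA :: HtmlA -> String
renderA (TextA s)    = s
renderA (NodeA t cs) = "<" ++ t ++ ">" ++ concatMap renderA cs ++ "</" ++ t ++ ">"

renderB :: HtmlB -> String
renderB (TextB s)    = s
renderB (NodeB t cs) = "<" ++ t ++ ">" ++ concatMap renderB cs ++ "</" ++ t ++ ">"

-- the direct structural conversion that avoids the render/reparse
-- round trip - possible whenever both sets of constructors are in scope
convert :: HtmlA -> HtmlB
convert (TextA s)    = TextB s
convert (NodeA t cs) = NodeB t (map convert cs)

main :: IO ()
main = do
  let doc = NodeA "p" [TextA "hello"]
  putStrLn (renderA doc)           -- <p>hello</p>
  putStrLn (renderB (convert doc)) -- same output, no reparse needed
```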

still going to stick with haskell, because it's the best way to work with pandoc... but it's difficult to trust for serious software development while several libraries work like this.

on the plus side, i can be relatively confident in the code if it typechecks... but this is no replacement for just running the code and correcting it afterwards.

haskell's laziness means it can't reap the optimization benefits of strict type systems either - such a silly language!

while it's fun to prove things in types, this is not the way to go to just get things done.

@jakeisnt huh! what about the laziness stops it from being optimised? i would have thought that would make it *more* optimisable since a compiler gets more options on when it could run code.

@zens @jakeisnt yeah, you can't do superoptimization on a strict language, because a strict language definitionally has extra guarantees about execution time and ordering. i think jake is probably just not using the right abstraction for the job - which is, admittedly, the cause of a lot of inefficiency when working with more expressive tools

@syntacticsugarglider @jakeisnt the impression i got, though it might be wrong, is that the main reason haskell isn’t optimised very much is that it just isn’t popular enough to merit the effort - not a technical limitation

@syntacticsugarglider @jakeisnt corporate sponsorship makes a big difference to availability of effort

@zens @jakeisnt i'm not an expert on haskell, but lazy languages have potential for optimization that is surprisingly effective - such as evaluators based on interaction nets (Lamping's optimal reduction and the like), and proposed rewriting rules that give the appearance of negative complexity for some functions when fusion occurs.
GHC is fairly conservative from a theoretical perspective, mostly because it doesn't really need to be faster and is already quite complex
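as a concrete taste of fusion: GHC's rewrite rules can collapse two list traversals into one at compile time. a minimal self-contained sketch of the mechanism, using a local myMap as a stand-in for the real "map/map" rule in base (the rule only fires under -O, but the program behaves the same either way):

```haskell
module Main where

-- a user-level copy of base's "map/map" fusion rule: with -O, GHC
-- rewrites two traversals into one before generating code
myMap :: (a -> b) -> [a] -> [b]
myMap _ []     = []
myMap f (x:xs) = f x : myMap f xs
{-# NOINLINE myMap #-}  -- keep the function visible so the rule can match

{-# RULES
"myMap/myMap" forall f g xs. myMap f (myMap g xs) = myMap (f . g) xs
  #-}

main :: IO ()
main = print (myMap (*2) (myMap (+1) [1, 2, 3 :: Int]))  -- [4,6,8]
```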

@zens @jakeisnt it leaves a lot on the table in a way that isn't mandated by the language at all - it's just that GHC already has a lot of involved optimization systems. those could probably be replaced in many cases with supercompilation by evaluation plus superoptimization at an evaluation level. i'm saying that based on a vague understanding of haskell's semantics and the received knowledge that GHC is complicated, but i presume such a more elegant compiler would basically have to be a clean-sheet design

@zens @jakeisnt i've seen @alcinnz comment on GHC's design at some length, and they could probably weigh in on this. honestly, though, GHC is good enough in my experience. there's some fiddling necessary at times to make sure optimizations don't get missed - there isn't really an overarching, formalized way that they're implemented with demonstrable reliability - but the generated code generally performs more than well enough for real-world applications.

@syntacticsugarglider @zens @jakeisnt Yeah, here's my page on the subject:

My understanding is that the main bottleneck preventing GHC from doing more optimizations is that it only compiles & optimizes a single file at a time BEFORE linking. It's pretty easy for GHC to determine where laziness isn't desired, and there are ways to turn off laziness if you need to.
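A quick sketch of the usual opt-outs (nothing GHC-specific beyond the BangPatterns extension): bang patterns force a binding at each step, and Data.List.foldl' is the library's strict fold - both avoid the thunk chains a lazy accumulator would build:

```haskell
{-# LANGUAGE BangPatterns #-}
module Main where

import Data.List (foldl')

-- a lazy foldl would build a million nested (+) thunks here; the
-- bang pattern forces the accumulator at every step instead
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

main :: IO ()
main = do
  print (sumStrict [1..1000000])             -- 500000500000
  print (foldl' (+) 0 [1..1000000 :: Int])   -- same, via the strict library fold
```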

@alcinnz @syntacticsugarglider cool! sorry if this feels like a pile-on @jakeisnt, not my intent - when there’s something that contradicts my view of the world i like to find out if i am wrong. if there’s an argument that laziness makes optimisation harder i’d like to read it.

@zens @alcinnz @syntacticsugarglider not a problem, i'd rather take some criticism and learn something than take away nothing at all : )

@syntacticsugarglider @zens those extra guarantees about execution time are precisely why it's easier to optimize those languages, right?

haskell also has a guaranteed evaluation order, but doing this at the call site rather than when parameters are passed to closures means (to me - not an expert by any means) that the arguments have to be evaluated at every single call site of every function
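a small demo of what call-by-need actually does with an argument, for anyone checking my intuition: the thunk is forced at most once, at first use, and shared afterwards - not re-evaluated per use (trace prints to stderr whenever the thunk is forced):

```haskell
module Main where

import Debug.Trace (trace)

main :: IO ()
main = do
  -- a thunk: trace fires only when (and if) the value is demanded
  let expensive = trace "evaluated!" (42 :: Int)
  putStrLn "before any use"       -- no "evaluated!" yet
  print (expensive + expensive)   -- 84; "evaluated!" appears exactly once,
                                  -- because the forced result is shared
```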

@jakeisnt @syntacticsugarglider hmmm i thought that the elimination of global scope/side effects would mean that it’s safe to do term rewriting in those instances, ahead of time if the compiler needs to. in languages like C guaranteed execution order has made it harder to optimise on newer cpu architectures that do out of order execution

@zens @jakeisnt yep, you can freely memoize and pre-evaluate stuff. There are no execution order guarantees afaik, just monadic sequencing (IO basically creates an artificial functional dependency across operations to guarantee their sequencing)
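the do-notation sugar makes this concrete: sequencing falls out of the (>>=) chain, where each action's continuation receives the previous result, so ordering is fixed by data flow rather than a separate rule. a small sketch, written in desugared form:

```haskell
module Main where

import Data.IORef

-- equivalent to: do { ref <- newIORef 0; writeIORef ref 1;
--                     readIORef ref >>= print }
-- each (>>=) threads the previous action's result into the next,
-- which is what pins down the execution order
main :: IO ()
main =
  newIORef (0 :: Int) >>= \ref ->
  writeIORef ref 1    >>= \_   ->
  readIORef ref       >>= print   -- prints 1: the write happened first
```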

@syntacticsugarglider @zens unfortunately, there isn't actually elimination of global side effects, so this isn't true. consider the error function and how much use it still sees in several common libraries!

i wouldn't be surprised if ghc is somehow able to perform a lot of additional optimization in code without things like error though
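a tiny demo of why error complicates things: it hides inside an otherwise pure value and only fires if that value is forced, so any rewrite or pre-evaluation has to preserve exactly which thunks get forced:

```haskell
module Main where

import Control.Exception (SomeException, evaluate, try)

-- a "pure" Int that explodes when forced
boom :: Int
boom = error "boom"

main :: IO ()
main = do
  print (const 1 boom)  -- prints 1: laziness never forces boom
  r <- try (evaluate boom) :: IO (Either SomeException Int)
  case r of
    Left _  -> putStrLn "forcing boom throws"  -- this branch runs
    Right v -> print v
```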

@syntacticsugarglider @zens also, there is no guaranteed execution order for arguments to c functions in the c specification. (however, most compilers do evaluate arguments to functions from last to first)

@jakeisnt @syntacticsugarglider not that i am an expert, but the order-of-execution issues in c optimisation that i have seen mostly concern things like the postfix increment operator - the subtlety of what value it should evaluate to, and when - or the vagaries of declarations and assignments: all the little things that must happen in the right order to produce a deterministic value.

@zens i don't know too much about ghc, but my intuition would be that the combination of laziness and heavy reliance on closures means that it's nearly impossible to inline - as none of the closures' arguments can be evaluated until reaching the final call site
