So I feel like everyone is reallll fed up with the status quo of digital electronics regarding efficiency, simplicity, learnability (the list probably goes on).
I have two questions on my mind:
1.) Does it just seem like that? Is it the programmers in my bubble becoming old and grumpy, or is there really something brewing? Do we have enough powder for the computing revolution of the 2020s?
2.) Who is already out there tackling this problem, and at which level? I fear we may have to go very deep and compromise intermittently to improve. What projects are out there, with vision and roadmaps?
Yup, I think you nailed it here, especially with the last part. I think somehow whatever comes next *has* to break compatibility if it wants to be better, but maybe there needs to be a mid-term plan for coexisting. One of the biggest pain-points for sure is supporting the web; without a browser you can't convince a single end-user.
Since reimplementing an OS and a browser from scratch, and better, is essentially impossible, I have only two ideas:
- Build a whole alternate device formfactor that just doesn't web. Back to the personal-personal computer or something, a machine for hacking.
- Build something lightweight that has the ability to run bare-metal, but also virtualized, and have it integrate A+ with existing operating systems, so users can use both until the current web is obsolete.
@s_ol @technomancy the web should be abandoned and relinquished to the capitalists. fundamentally this technology is inadequate to achieve its goals and must also be re-architected. we could have such an amazing hypermedia web, but unfortunately Tim Berners-Lee's projects, the Semantic Web and Solid, won't succeed because they rely on backwards compatibility in a hostile and chaotic space.
@s_ol Guix is one project that I feel strikes a good balance between doing something radically different (and imho better) while also staying reasonably compatible with the existing Linux ecosystem.
And they are seriously targeting the Hurd, and it could be a great foundation for other projects. The community is also pretty cool!
It also tackles bootstrapping.
@s_ol Also, I'm only in my twenties and didn't start programming until I was like, idk, 16? So I'm not just old and grumpy. But I did read a lot about old tech and I hang around some people who have seen and experienced the old ways as well as the new.
When I started out, I was all starry eyed about Emscripten. Now, I can't wait to delete all modern browsers from my machine.
@s_ol I am researching this problem and incubating a project to replace the whole computing stack. unfortunately, I don't have much to show at the moment.
yes to the computing revolution of the 2020s!
I would be excited to hear more as soon as you have something!
Maybe you could share what level of the stack you are branching off at?
silicon / kernel / userspace / browser
@s_ol branching off at the silicon level to avoid proprietary and user-hostile technologies. but really branching off philosophically because the lambda calculus is a safer way to formalize computing abstractions.
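To make the lambda-calculus remark a little more concrete: here is a minimal sketch of Church numerals in plain Python (the host language is just for illustration), showing how arithmetic can be built out of nothing but single-argument functions.

```python
# Church numerals: a number n is "apply f n times to x".
ZERO = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
ADD = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Convert a Church numeral to a Python int by counting applications.
    return n(lambda k: k + 1)(0)

TWO = SUCC(SUCC(ZERO))
THREE = SUCC(TWO)
print(to_int(ADD(TWO)(THREE)))  # 5
```

Every value here is a closure, which hints at why the lambda calculus makes such a small, uniform foundation to formalize on top of.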
it's a great question and I am glad you asked it. there is sincere sentiment especially in the fediverse that modern computing is fundamentally broken. people are attacking this problem from all different angles and it's amazing, fun and beautiful.
@s_ol Myself, I'm tackling The Web's complexity! And arguing that because of it we lost something beautiful, something needed not only for accessibility but also for the diversity of new devices.
And @akkartik has done some interesting stuff regarding programming languages, very low level!
As for 1) the best I can answer is: I'm 25, and have been programming for about 15 years.
In the Mu paper, you seem to be building up to be able to host a C kernel. Is this a self-imposed scope limitation to focus on the foundation first? Do you have some plans, thoughts or dreams for how Mu might grow up to the user level in the future?
@s_ol Mu is actually hosted _by_ a C kernel right now. That's a compromise I hope to undo over time.
The dream for the future is to be able to start writing interesting programs, while preserving three properties:
a) Source present, rebuilt on demand.
b) Drill down into the trace for any command on the commandline.
c) Turn any sequence of commands into a reproducible test.
Major open questions: graphics, forking hypothetical file systems.
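Property (c) can be sketched generically (this is not Mu's actual interface; `record` and `replay` are hypothetical names for illustration): capture a sequence of shell commands together with their outputs, then re-run them later as a regression test.

```python
import subprocess

def record(commands):
    """Run each command once and capture its output as the expected result."""
    return [(cmd, subprocess.run(cmd, shell=True, capture_output=True,
                                 text=True).stdout) for cmd in commands]

def replay(recording):
    """Re-run the recorded commands; fail if any output changed."""
    for cmd, expected in recording:
        got = subprocess.run(cmd, shell=True, capture_output=True,
                             text=True).stdout
        assert got == expected, f"{cmd!r}: expected {expected!r}, got {got!r}"

session = record(["echo hello", "echo world"])
replay(session)  # passes as long as behavior is unchanged
```

The interesting part in Mu's vision is that the trace from (b) would give you much finer-grained expectations than raw stdout, but the record/replay shape is the same.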
@alcinnz thank you so much for putting us together!
@s_ol Since you asked about visions and roadmaps:
I want The Web to work great on any I/O medium! And I try to balance being compatible with existing websites against doing something new.
Currently I'm implementing an auditory web browser "Rhapsode", with hopes/plans to tackle print then smart TVs. Or at a finer-grained level, I'm currently tackling in-page navigation & SpeechDispatcher integration, and after that I'll tackle links, voice recognition, and then forms.
@s_ol Oh, and you'd be interested to know: the geospatial standards organization "The OGC" is working on new models for representing locations on earth called "Discrete Global Grid Systems".
They're really trivial to implement & render! But there have been challenges in getting buy-in from the broader industry.
And yes I've been working on this as well.
@s_ol Speaking for myself, yeah, I totally turned grumpy in the last 6 months, I've had a total breakdown when it comes to technology. I might never recover. I have not yet found anyone working in the direction that you mean, I'm actively looking. If I find anything, I will report back.
@s_ol @lonnon @neauoire I think the issues are more related to the way we consume tech (should we even do that?) than to tech choices themselves. Technological objects became objects of power for industries and devs (I think we are part of the problem), and objects of desire/placebos for users and consumers. This is bad because it hijacks the perspective of design away from function and towards seduction. I think the urgency of our times doesn't need seduction. @kensanata
@lonnon @s_ol @neauoire It strikes me that the economics of all of this are really important. Developer salaries create a *desirable* barrier to entry: the more wonky your stack gets, the easier a salary is to justify. With many of the world's highest earners coming from tech, this self-perpetuates myths of complexity, but also the desire to own "the next big thing", exacerbating the number of options, especially at higher levels of the stack as the number of developers steadily increases.
@lonnon @s_ol @neauoire palm pilot is objectively better. I have a list of points. only real problem with it is it needs to sync with a mothership and that mothership needs to be running an old operating system. i won't get into it here but i could talk for hours about why it is better. but if you want an idea of where i am coming from you can check my list of ui principles pinned to my tooterfall.
@lonnon @s_ol @neauoire as to the original poster's question: things are getting worse. in the 1970s and 1980s ui designers learned a lot of hard lessons at xerox and apple. they wrote these down. everyone who gave a shit about those lessons was ousted from apple after jobs died. facebook and microsoft never cared much in the first place. if you take the list of lessons you can use it to review UIs like a checklist. i have a version of the checklist. fewer and fewer points check out every year.
@lonnon @s_ol @neauoire more and more actual expertise is being ignored in favor of an analytics driven microoptimisation approach, the goal of which isn’t to help you get work done, like the bicycle for the mind. the goal is single minded improvement of “engagement”, often taking the form of emotional manipulation and distraction tactics, the users are the product yadda yadda, but this is crystal clear in the design of a palm pilot vs. an iphone or android.
My sense is that this isn't just about labor markets or capitalism. By this time, Dijkstra's structured programming turn had been normative for an entire generation.
Per Mills (1986), programming had gone from "writing computer instructions" to a "mathematics based activity". These "fundamentally altered human expectations" informed the decades that followed.
@zensaiyuki @lonnon @s_ol @neauoire There's an anthropologist, Annemarie Mol, who last I touched base with was doing research into the episteme of eating, her site of study being how eating is articulated (whether as "nutrition" or "meal") in nursing homes. What she's finding is that the episteme that organizes (to relate to Foucault, that "orders") eating has profound (and counterintuitive) effects on outcomes for nursing home residents.
Shitty UI is a product of institutionalized thinking.
I see this, but it's talking about user UI. For users, yes, it's possible to pave over whatever is underneath, and it's absolutely possible to build *anything* on the current stack. There might be drawbacks and inefficiencies, but it will be faster than building up from the ground for sure.
The problem, in my eyes, is that programmer UX matters. And we cannot pave over that with more layers I think (that's what no-code startups are trying though)
@s_ol @beadsland @lonnon @neauoire i empathise with that vision, but i also know the makeup of the "user" population today is very different from the days of hypercard and basic. except for the outliers like us, people use computers because they need to now, they're a requirement for participation in society. they don't care about writing their own scripts and you can't make them care. that doesn't mean they don't matter.
Bracketing our societal dereliction of infrastructure, as such, I'd argue that institutional procurement systems that hew to _specifications_ for ordering the design and deployment of computing systems is a significant factor.
It happens that this framing—emergent of the structured programming turn—fits well the industrial bureaucratic mode, even if it poorly represents (as in, not at all) the fluid process of iterative development and user testing.
@zensaiyuki @s_ol @lonnon @neauoire Folk using hypercard didn't care about writing scripts, if I recollect that era at all, more often than not, they cared about communicating a story in new and experimental ways, and hypercard provided visual intuitions about how to do that.
The key is that scripting wasn't a "feature" of hypercard tucked away in a submenu. Organizing the presentment of ideas and the relating of thoughts was the purpose of hypercard. And so those who wanted to do that, did so.
But the UI issues come of an epistemology. Those checklists that came out of Xerox PARC are largely forgotten because they represent a different way of thinking about human-computer interaction, a way of thinking that is fundamentally incommensurate with the current episteme, because the current episteme doesn't articulate interaction, as such.
A breakneck development pace means you can’t slow down and consider whether things are a good idea or optimize things, and cheap performance makes optimization even less important.
Palm had been one of the first companies in the PDA market (having done the handwriting recognition and PIM for the Zoomer PDAs), and with those devices having basically flopped, they had the advantage of having learned from their mistakes.
Additionally, when Palm was learning, and preparing Palm OS, the PDA market was already considered a failed market, which reduced competitive pressures, and gave them time to think things out more thoroughly. (Note that this is why the first Pilots were released as “connected organizers”, not PDAs - retailers didn’t want PDAs, whereas organizers were an established product category.) This actually became a disadvantage when competition did appear - Palm moved too slowly to adapt to smartphones becoming a thing, and was forced to bodge phone support onto their OS, which seriously negatively affected stability. (And there was never a usefully good browser.) But, Palm OS was excellent in its prime (up to about 4.1) just because it had a chance to be so focused.
Contrast with modern mobile devices, where robust competition means everyone has to release new features at a breakneck pace, not having a chance to stop and ask whether things are a good idea… or even knowing something’s a bad idea but having to ship it anyway. And, then, there’s the whole thing where maintaining the eleventy billion layers of abstractions that should have never been created, but can’t be collapsed, costs so much that even with absolutely absurd economies of scale, you either have to pay the Apple Tax (although iOS has been buggy for quite a while, and Android flagships have gotten hideously expensive too) or deal with surveillance capitalism.
And of course the browser is its own eleventy billion layers of abstractions, which is why we’re down to a surveillance capitalist and a non-profit that’s selling its users out to surveillance capitalists making modern browser engines. (My own thought for a gordian-knot-cutting mobile device boils down to “pull an Opera Mini and run the rendering engine on a Xeon/Epyc in a datacenter somewhere”, which at least gets it off of the mobile device and likely reduces bandwidth requirements as well, but that’s not the most satisfying answer.)
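The "run the rendering engine in a datacenter" idea can be caricatured in a few lines: the heavy parsing happens server-side, and the client receives only extracted text. This is a toy sketch of that split, nothing like a real Opera Mini transcoder; `TextExtractor` and `render_remotely` are illustrative names, not any existing API.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Strip markup and scripts, keeping only the visible text."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self.skip = 0  # depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.parts.append(data.strip())

def render_remotely(html: str) -> str:
    """The server-side half: reduce a page to the text the client needs."""
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.parts)

page = "<html><script>track()</script><p>Hello, <b>web</b></p></html>"
print(render_remotely(page))  # only the readable text survives
```

The point of the design is that everything above this function (layout, JS, the eleventy billion layers) stays on the Xeon, and the mobile device only ever sees the reduced representation.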