The more I write in the [[agora]] the more I find myself naturally gravitating to [[rdf]]

@flancian did you see my recent thread about the PL i'm working on? it's an appropriate example of what a general data model (though not a prescriptive universal data model) might look like for my research. i pinged your other account (on merveilles), not this one, so idk if you saw it

@flancian btw is there a spec for a machine-readable interface to the agora of some sort? i assume there must be, given its federated nature, but i couldn't find one in a few click-throughs of the related github

@flancian "Once we get down to details, the Agora currently just checks out git repositories in a loop (this likely won't scale, but it might be good enough for the first few hundred users)"
oh hm this isn't an ideal interface as far as isolating complexity is concerned. though thankfully vessels' infrastructure is designed around those cases: i'd write a conventional vessel that handles the use of libgit2 or something and then optionally talk to it from within welkin (my PL)
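The isolation being described, hiding git behind a narrow interface so consumers never touch it directly, can be sketched in a few lines. All names below (`NodeStore`, `InMemoryStore`, `render`) are hypothetical illustrations, not part of the Agora or vessels codebases; a git-backed vessel would implement the same interface by delegating to libgit2 bindings.

```python
from typing import Protocol

class NodeStore(Protocol):
    """Narrow interface the rest of the system depends on.

    These names are illustrative, not from the Agora or vessels code.
    """
    def list_nodes(self) -> list[str]: ...
    def read_node(self, name: str) -> str: ...

class InMemoryStore:
    """Trivial implementation; a git-backed vessel would satisfy the
    same interface, keeping git's complexity behind this boundary."""
    def __init__(self, nodes: dict[str, str]):
        self._nodes = nodes

    def list_nodes(self) -> list[str]:
        return sorted(self._nodes)

    def read_node(self, name: str) -> str:
        return self._nodes[name]

def render(store: NodeStore, name: str) -> str:
    # The consumer only ever sees the narrow interface, never git itself
    return f"# {name}\n{store.read_node(name)}"
```

The point of the sketch: swapping the git-backed implementation for the in-memory one changes nothing on the consumer side, which is exactly the complexity isolation being asked for.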

@syntacticsugarglider nice! Isolating complexity in which way? Isolating which subsystems from each other?

@flancian git is very complex and very little of that seems directly relevant to the display & distribution of the structured information present in an agora's exposed interface

@flancian i have what I think are some extremely interesting ideas about how to replicate versioned data if that's a necessary property, in a general and composable way, using properties of the tools I'm working on right now. if anything comes of it i'll share my thoughts in more detail. it's perfect for structural/interlinked things at scale: you can view an entire indefinite system as a replicated structure or replicate arbitrary sub-units of it without prior planning. quite elegant i think

@syntacticsugarglider nice! but, see, what you said in my mind maps to git just fine? a file can contain a pointer; in particular text, a program.

@flancian you can do just about anything with any tool. the question is whether the required invariants are embedded in that tool as minimally as possible and in a way that makes them reliable, i.e. that the system cannot enter into a state that is not appropriate for the task at hand. it is possible to implement elegant convergent replicated filesystems on top of git, but in my opinion it makes very little sense to use large monolithic tools like git.

@flancian ipfs is a great content-addressed store and i will be at the very least implementing a vessel to provide interoperability with it for the many things i'm implementing that are suitable for content-addressed storage. i don't know much at all about IPLD. however, i do know that these things are designed to handle static/immutable data structures as opposed to the replication of dynamic versioned content. you can use a log, but then (1/2)

@syntacticsugarglider @flancian IPFS also has a versioning system. You can publish a signed block containing a link to the new version, signed and newer versions of old blocks get forgotten. It's probably not the best possible way to do it (last time I tried to use IPFS it had a lot of scalability issues, dunno if they're still there) but it works.
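The scheme described (publish a signed block pointing at the newest version, with newer records superseding older ones) can be made concrete. This is a toy approximation of IPNS-style resolution: an HMAC stands in for real public-key signatures, and none of these function names come from the IPFS APIs.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # stand-in for a real keypair

def content_address(data: bytes) -> str:
    # IPFS-style content addressing, reduced to a bare SHA-256
    return hashlib.sha256(data).hexdigest()

def publish_pointer(seq: int, target: str) -> dict:
    # Signed record pointing at the newest version; higher seq wins,
    # loosely analogous to an IPNS record
    body = json.dumps({"seq": seq, "target": target}).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def resolve(records: list[dict]) -> str:
    # Discard records whose signature doesn't verify, then take the
    # highest sequence number among what remains
    valid = [
        r for r in records
        if hmac.compare_digest(
            r["sig"], hmac.new(SECRET, r["body"], hashlib.sha256).hexdigest()
        )
    ]
    latest = max(valid, key=lambda r: json.loads(r["body"])["seq"])
    return json.loads(latest["body"])["target"]
```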

@mithrandir @syntacticsugarglider nice, I have it on my todo list to do some kind of "hello world" for ipfs this weekend. I've only used it through the CLI so far.

What about [[ipld]]? Does it help?

@mithrandir @flancian IPLD is super cool and i'll probably write a codec for the terms of my PL, because there's a ton of use-case for static long-term content-addressed stuff (i.e. the type-addressing i was talking about), you might well fetch the reducer functions for processing replicated data like I was mentioning from an IPFS store using IPLD to traverse and look up the necessary references out to other program components

@mithrandir @flancian there are two adjacent but distinct problem domains here, though:
1. convergence & replication in not-necessarily-interactive systems over a typed program-space
2. having some way to actually get access to information in a distributed system

the former doesn't actually care how you get the "frames" of operations, it just specifies how to put them together into something that preserves intent and converges on a single consistent result.
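A minimal concrete instance of domain 1: as long as the merge rule is order-insensitive, every delivery order of the frames converges on the same state. The frame format below (key, timestamp, value triples merged last-writer-wins) is an invented toy, not the PORDT formalism itself.

```python
import itertools

def reduce_frames(frames):
    # Merge rule that is associative, commutative, and idempotent:
    # keep the highest-timestamped value per key (last-writer-wins)
    state = {}
    for key, stamp, value in frames:
        if key not in state or stamp > state[key][0]:
            state[key] = (stamp, value)
    return state

frames = [("a", 1, "x"), ("b", 2, "y"), ("a", 3, "z")]
# Every permutation of delivery order converges on one result
results = {tuple(sorted(reduce_frames(p).items()))
           for p in itertools.permutations(frames)}
assert len(results) == 1
```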

@mithrandir @flancian IPFS basically solves the latter problem, and you could totally put a series of frames for a replicated structure on IPFS if you wanted to, and doing so in an append-only log sort of way is actually perfectly reasonable because you get the antecedent-identifiers (but not the causal clocks) for free
in a small garden of friends i'd just use a gossip protocol or a raspberry pi sitting on my desk and relaying stuff, though. the point here is that all this is abstract (1/2)
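The "antecedent-identifiers for free" point can be made concrete: in a content-addressed store each frame can name its predecessor by hash, so an append-only log carries its own causal chain (though, as noted, not full causal clocks). A hedged sketch with hypothetical names:

```python
import hashlib
import json

def frame_id(frame: dict) -> str:
    # Content address of a frame: hash of its canonical serialization
    return hashlib.sha256(
        json.dumps(frame, sort_keys=True).encode()
    ).hexdigest()

def append(log: list[dict], op: str) -> list[dict]:
    # Each frame names its antecedent by hash, so storing the log in a
    # content-addressed store (e.g. IPFS) links the chain for free
    prev = frame_id(log[-1]) if log else None
    return log + [{"op": op, "antecedent": prev}]
```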

@mithrandir @flancian over the role that IPFS plays, which is very... infrastructure-analogous, I guess. Of course I have to say my usual disclaimer yet again wrt the use of the PORDT formalism that "this is just an opinion that is not special-cased in the noocene project", if you have different ideas and they are somehow less privileged by my decisions then I have substantively failed to achieve my goals.
Regardless, the thing to compare to what i'm talking about is (1/2)

@mithrandir @flancian stuff like Treedoc, though it's exceedingly old and rudimentary. Or, in informal space, things like Y.js or automerge. Or, the closest thing, though it's quite niche and not really well-known despite being really beautiful, the replicated object notation of replicated.cc. What I'm talking about is basically a substantive enhancement of that last one leveraging the properties of a specific highly expressive programming environment.

@syntacticsugarglider wow, this is *great* stuff! I thank you kindly. It's like you just gifted me a goldmine.

@flancian ah, what's the source of interest? the specific mention of replicated data models?
this absolutely *immense* blog post: archagon.net/blog/2018/03/24/d
is what led me to these recent conclusions. i've been thinking about CRDTs for a long time, and that's the general concept that all of the things I mentioned above fit into, but that article introduced me to PORDTs and RON which gave me a nice denotational formalism for composing these things, which was what I'd been looking for.

@syntacticsugarglider I hope this doesn't come across wrong, but I would recommend you consider using the [[agora]] to write down your thoughts -- by this I mean, publish to it. You use acronyms I don't know, heavily and often -- this is an ideal case for linking. I research stuff and actually document acronyms in the agora often, but it takes time :) and if we do it in a common platform, we only do it once. It might scale.

@flancian yeah using *anything* with linking or writing long-form things with refs out to external explanations of specialized concepts would be vastly better than writing all these super long involved threads on fedi
i could have produced so much guide-level work in the time i've spent having these interactions in general
but i have like... a mental block against using tools that i don't have extensive familiarity with or investment in at this point

@flancian by investment i mean like, direct involvement in the development of.

@syntacticsugarglider I mean, I'd love to rewrite the [[agora]] in [[rust]]. I'd do it if I had the time and skill; and I hope to get both, or some reasonable fraction.

In essence it's the idea that's important, I believe. The more implementations the merrier?

@flancian i wouldn't rewrite it in rust personally, i'd write the minimal interface of non-formalizable things in Rust (probably just libgit bindings) and then provide those objects to another vessel running a welkin-core evaluator and then create welkin abstractions around that which could then be program-extracted to Rust using type projections from the inherent reflection
but yeah, interacting with external systems was important in designing all the vessels stuff. your agora is a priority.

@syntacticsugarglider can you help me out with filling in these nodes?

anagora.org/izzy-swart

What are noocene, vessels, welkin in one or two sentences, and what is the most representative URL for each of them? I looked for github repos (searched in github.com/syntacticsugarglide) and didn't find the obvious match.

If you put them in a tree, which one would be top level? noocene?

@flancian
noocene
- vessels
i don't know where welkin goes. it exists to serve vessels more or less, but it's also a general-purpose programming language. it's definitely part of noocene, but i wouldn't bin it under vessels because it's independently useful and exciting.

@flancian
note: i'm terrible at short summaries, but let me give it a shot
noocene: both an overarching implementation project and underlying philosophy and metaphysics designed to facilitate unprecedented freedom of human creative expression, collective externalization of thought (i.e. shared exocortices), large-scale semantic processing, and the general embrace of the unknowable and infinite through transformative tools that operate on distributed program structures that embed human thought

@flancian
(URIs for the above include noocene.com and github.com/noocene, the former will likely be canonical once i have some long-form writing to put there and the latter is where all of the project work will be hosted for the foreseeable future)

@flancian
vessels: implementation work intended to facilitate the above goals. Specializes somewhat but still intends to be separable and minimal
key components:
github.com/noocene/protocol - distributed object model for rust. not actually a protocol, more of a generator for abstract meta-protocols
github.com/noocene/looking-gla - rudimentary portable type system for rust
and then the isolation/container system, which is the actual "vessels", rests on protocol and is split among multiple repos on that org


@flancian welkin-core: a programming language with a dependent type system based extremely closely on Victor Maia's EA-TT that serves simultaneously as a consistent proof environment for formal reasoning about complex programs in a composable way, as well as a high-performance general purpose programming language that can be evaluated in interesting environments (constrained things like MCUs, parallel things like GPUs, etc.)
it's very small and barely human readable, it's more of an IR

@flancian the canonical URI for this is github.com/noocene/welkin-core
this basically serves as a unified *abstract* data model (independent of serialization, which can be reasoned about in the language itself) and program semantics, as well as a distributed type system using Unison-style term addressing.
it's an *incredibly* expressive foundational core for type-safe composition and formal reasoning about distributed systems
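Unison-style term addressing boils down to naming a term by a hash of its canonical structure, with variables as de Bruijn indices so surface names are irrelevant. A deliberately minimal sketch; the tuple encoding of terms is invented for illustration and is not welkin-core's actual representation.

```python
import hashlib

# Terms: ("lam", body), ("app", f, x), ("var", n) with de Bruijn index n,
# ("ref", address) for a link to another content-addressed term.
def term_address(term) -> str:
    # repr of nested tuples of strs/ints is deterministic, so it serves
    # as a crude canonical serialization for hashing
    return hashlib.sha256(repr(term).encode()).hexdigest()

# `\x -> x` and `\y -> y` are the same de Bruijn term, hence one address:
# renaming never creates a "new" definition
identity_a = ("lam", ("var", 0))
identity_b = ("lam", ("var", 0))
```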

@flancian it's not inherently tied to vessels in any way, using it with vessels just means you run an evaluator for it inside a normal vessel just like any other (which in the current version are WASM binaries, so you'd compile the welkin-core net evaluator, which is provided as a library, into a container that exposes a `protocol` that lets terms be evaluated, and other vessels would request that as a capability and use it as a runtime)
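The flow described, one vessel exposing an evaluator that other vessels request as a capability, might be sketched like this. `Host`, `expose`, `request`, and `CoreEvaluator` are all invented names; the real system would mediate WASM binaries over the `protocol` machinery rather than Python objects.

```python
class Host:
    """Toy capability broker: vessels expose objects under a name and
    other vessels request them. Purely illustrative."""
    def __init__(self):
        self._caps = {}

    def expose(self, name: str, obj):
        self._caps[name] = obj

    def request(self, name: str):
        return self._caps[name]

class CoreEvaluator:
    """Stand-in for a welkin-core net evaluator compiled into a vessel."""
    def evaluate(self, term: str) -> str:
        return term.upper()  # placeholder "reduction", not real semantics

host = Host()
host.expose("welkin-core-evaluator", CoreEvaluator())
# Another vessel acquires the evaluator as a capability and uses it
runtime = host.request("welkin-core-evaluator")
```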

@syntacticsugarglider thank you! here we've gone way beyond my sphere of knowledge, so I need to ask something simple:

Why do we need [[welkin core]]? What's impossible without it? What's harder than it should be without it? What's a nice to have?

anagora.org/welkin-core

@flancian it allows composition and self-modification of typed programs at an arbitrarily granular level. basically, this adds another layer of composition to vessels. there's the "vessel", which is somewhat monolithic. it's a WASM binary. it can generate new binaries, though that's usually costly; still, vessels can be quite performant and are easily targeted by a variety of existing tooling.
then you have anything in welkin-core, which is just a... space of terms.

@flancian because it has a dependent type system you can write *proofs* of properties of systems, and the axioms of those proofs can just be capabilities acquired from the environment. Thus, you can choose your assumptions (if any) and then compose programs in an environment without perfect trust not just by isolating them but by absolute confidence that they will perform as expected due to the ability to formalize basically any guarantee about their behavior that you can think of.

@flancian this, for example, when considering the data replication stuff I was talking about earlier, is what lets you use arbitrary programs as reducers. You can represent the identity type, idempotence thereby, as well as associativity and commutativity as logical propositions at type level by the Curry-Howard correspondence, and if you have well-typed inhabitant terms they constitute a proof. Thus, there's no need to assume code is correct that could break convergence.
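What the dependent types would establish once and for all, as proofs, can at least be spot-checked at runtime in an untyped setting. The check below is a property-testing approximation, not a proof: it exercises associativity, commutativity, and idempotence on sample values, laws that set union happens to satisfy.

```python
import itertools

def satisfies_reducer_laws(f, samples) -> bool:
    # Runtime spot-check of the laws a convergent reducer needs;
    # in welkin-core these would be typed proofs, not sampled tests
    for a, b, c in itertools.product(samples, repeat=3):
        if f(f(a, b), c) != f(a, f(b, c)):  # associativity
            return False
    for a, b in itertools.product(samples, repeat=2):
        if f(a, b) != f(b, a):  # commutativity
            return False
    return all(f(a, a) == a for a in samples)  # idempotence

union = lambda a, b: a | b
samples = [frozenset(), frozenset({1}), frozenset({1, 2})]
assert satisfies_reducer_laws(union, samples)
```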

@flancian there is an immense amount of power to this, the entire way vessels represents semantic relationships is based on thinking about types as logical propositions about one's subjective understanding of reality in general, so being able to prove things about those relations and generalize them in interesting ways is exceedingly useful. there was always a plan to have some sort of formal overlay type system and i've only recently developed the background to be able to execute this.

@syntacticsugarglider so you see it as a formal language of particular power, perhaps an intermediate representation format for human-and-higher-level thought? Something else?

@flancian vessels? the noocene idea in general?
the noocene metaphysics in general is the idea that universals are absurd, our perceptual framework *is* our reality and we can externalize parts of that perceptual framework into anything that can represent the relationships held therein
the lower the friction of that embedding the more we can inhabit those spaces, and that friction will be lower as expressiveness increases and granularity increases
the collective result is unknowable

@syntacticsugarglider here I diverged, my apologies :) "I like lisps and haskell but I know python" is where I'm at. I'm a basic kind of guy honestly.

Is this whole system for people that get what you're saying (that's fine; I could try to become one of them)? What does it *do* for others?

@flancian this is like asking me what the laws of physics do for others, having never lived in the world
for me i'm going to build a bunch of analogues to existing software tools and because they're all built out of tiny components in the same abstract typed universe i'm going to manipulate them subtly and rearrange them and produce shared abstractions and attempt to converge on a unified subjective inhabitable reality of computation as a pure final shape of the way I think about the world

@flancian for anyone else i have no idea
i have long-term socio-ontological ideas about the implications of these things, that we can *fundamentally* alter the way we think about reality and others, build a legitimate shared cognitive space operating over a simple underlying propositional language that can be arbitrarily abstracted and transformed into things that make sense to each of us personally, or to our place in the convergence/divergence that emerges as our collection

@flancian but the whole *point* of all this is that I don't want to be responsible for the world others exist in. I want vast multitudes of incomprehensible but subtly interconnected realities where everyone is free to create and modify and compose and reshape at will with the maximal amount of expressive capability and the minimal constraint. I get asked where "fundamental novelty" is a lot but nothing is ever fundamentally novel, transformation comes from the removal of barriers.

@flancian It's all about giving the unbound will a path of least resistance to plurality.

@syntacticsugarglider yes my friend, in some embeddings in such a space you and me are one.

@syntacticsugarglider but at what cost? Who is the user, and through which (programming or user) interfaces?

@flancian i mean this has to be my response to most questions like this. the short term answer is that i can, like with the rest of my work, trivialize all of the incidental complexity i see in contemporary software. the medium-term answer is that i can make my environment arbitrarily flexible and everyone can have a unique transient composition of the world into their own framing.
the long term answer is i have no idea what that looks like for anyone else or even me in a couple years

@syntacticsugarglider right, yes, here we converge. The [[agora]] is much simpler than this but I think the same way about it; it is *amazing* for me, and I dream of it being amazing of others. How to get there usually eludes me :)

@syntacticsugarglider it is mostly because of this, not primarily in pursuit of elegance, that I also try to keep it "as simple as possible". As I know you do. Fewer nodes -> less context to transfer -> more efficient communication, perhaps (my dream) more generality in usefulness.

@flancian mm... simplicity is *also* in pursuit of elegance for me. the reason I care about this so much is because of my relationship with reality. i dislike it, frankly. for a variety of reasons my concrete lived experience has been deeply unpleasant for a long time, and moreover that's left me with a conclusion that i am particularly ill-suited for our shared prescriptive reality. (1/2)

@flancian yep. especially in my discussions with @natecull i've talked at length about the idea of the "narrow conceptual neck", which is the unifying transformation-set between subjective perceptual realities (a role typically provided atm by natural language abstracting over shared lived experience, though this is *deeply, even inexpressibly* inadequate) (1/2)

@flancian this transformation-set, which can be computationally realized as a language of equivalences over propositions (or any analogue) seems to be entirely equivalent to what you're talking about here

@flancian this is a *core* idea in vessels, and was a key realization I had years ago that might well have been the most important catalyzing factor. it took me from wanting to design an unusually flexible plugin system to developing a completely new metaphysical framework and understanding of the possibility-space of computing.

@flancian well, core idea in the noocene more generally. it's the reason the noocene even works at all as a plural space of many agents (human or otherwise).

@flancian @natecull i absolutely know of borges and have been meaning to read a lot of his work, especially "Tlön, Uqbar, Orbis Tertius", for quite some time now. haven't got around to it.

@syntacticsugarglider @natecull I'd also recommend:

Library of Babel
The Aleph
The Garden of Forking Paths
Death and the Compass

As a sampler

@flancian but ultimately the reason I'm still alive, very directly, is because I have this work. despite everything that's been unpleasant about my life I have a way to give people the power to inhabit something beautiful, in the way that the hardly inhabitable space of computing as it exists today has still been my one solace and escape from the rest of what exists. but that beauty is key. i need to feel, as type theorists often say, that my embedding is "morally correct", meaning elegant

@syntacticsugarglider@merveilles.town sorry but I don't get it :) why isn't a graph whose nodes are, say, a lisp enough? :)

@syntacticsugarglider why do you call it [[welkin core]]? What is the rest of [[welkin]]?

@flancian i was about to get to this. welkin-core is designed to be easy to evaluate but also preserve any type-level properties necessary (it's basically the calculus of constructions with induction, only unlike how that's typically done it doesn't need universe levels, guardedness, or a primitive datatype system. it's all just function types)
so it's a perfect target for multiple high-level languages to participate in the same typed distributed system
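"it's all just function types" has a classic concrete illustration: Church encoding, where data such as natural numbers are represented purely as functions, with no primitive datatype system. This toy in untyped Python lambdas conveys the flavor only; it is not welkin-core's actual encoding.

```python
# Church-encoded naturals: numbers as pure functions, no primitive datatypes
zero = lambda s: lambda z: z
succ = lambda n: (lambda s: lambda z: s(n(s)(z)))
add = lambda m: lambda n: (lambda s: lambda z: m(s)(n(s)(z)))

def to_int(n) -> int:
    # Interpret a Church numeral by instantiating it over the integers
    return n(lambda x: x + 1)(0)

two = succ(succ(zero))
```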

@flancian the main welkin repo is at github.com/noocene/welkin
which is currently a bootstrap compiler written in Rust from a higher-level language that's sort of analogous to conventional functional languages to welkin-core terms
and I'm going to use that to implement a self-hosted "macro"-esque descent compiler based on an agda-like TC reflection monad
basically this will allow the IDE and "source" of the language to be just as flexible as the rest of the system

@flancian i could conceivably thus create an editor that is an actively running program which self-modifies as you edit code, where you can turn sub-components of the program (individual expressions) into things that are visually displayed in interesting ways, i.e. akin to a Mathematica notebook but generalized and self-reflecting/modifying
so programs/program editing are unified with manipulation of arbitrary user interface/incorporation of structure from the general computational graph

@syntacticsugarglider hmm, why is designing the ideal [[ide]] part of the project? Why shouldn't people be able to use vim or vscode? Again, a basic question, but I want to understand why welkin is needed instead of just using rust. This feels like one link too many, perhaps? You are designing the low level language *and* the high level language? Why are both needed? If you see what I mean.

@syntacticsugarglider early feedback: this is where I think treating your whole system (at the level of one entity; say noocene?) as an abstract data type and writing a one-pager about the interface could perhaps help.

I'm saying this because, well, "one new language" is where I draw the line by default. At least it feels like we've gone beyond "as simple as possible", so far.

@syntacticsugarglider I think it's where I expected it to be -- I'm just chaotic :)

@flancian describing welkin as a "language" is a bit misleading on my part, it's more of a simple bootstrap language that I'm building as a short-term ergonomic solution so I can *design away a language entirely*
I describe this IDE because it's just an arbitrary transient composition of vessels, like any other, over transformations that eventually resolve down to welkin-core terms
It's not really "a single IDE", because there is no software in the noocene
Just the transient state of one's world

@flancian what exists right now in welkin is *exclusively* syntactic sugar over the core to make this process of building up abstractions and transformations over it easier
it's not really a separate language
the term "welkin" will eventually refer to *any* system that generates welkin-core terms, generally implemented in welkin-core itself (through some other transformation system)

Merveilles

Merveilles is a community project aimed at the establishment of new ways of speaking, seeing and organizing information — A culture that seeks augmentation through the arts of engineering and design. A warm welcome to any like-minded people who feel these ideals resonate with them.