@akkartik mhmm, it depends, I'd say time? But uxn just waits if a frame takes longer than expected, so it's not really noticeable.

This week, people on the mailing list revealed that they were using fancy stashing techniques to spread logic over multiple frames, and I realized that I was using the update vector for almost everything, and maybe I shouldn't do.. that-

I haven't written an application that was larger than 20kb yet, so not space.

@neauoire Do you use any dynamic memory? Like, lines.love is 40KB, but it usually allocates a few tens of MB.

With LÖVE I've been finding that CPU is plentiful -- as long as I don't use too much memory, overloading the GC.

@akkartik I've been seeing the words garbage collection flying around all day on here, and I went to look it up and I don't understand what they mean.

Is uxn garbage collected?

@neauoire @akkartik AFAIK uxn is fully non-GC, but that's also because it's fully programmer-managed memory anyway, out of that static 64KB block.

the rough tl;dr is: in a language like C you call malloc() to get a block of heap ram at some given size, and have to remember to free() that ram later or it'll leak. in a language like, say, Lua, strings just "come into existence" as far as the developer is concerned, and the VM tracks their lifecycle (and calls free() for you when a string "dies")
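The GC half of that can be made concrete in Python (illustrative only, nothing to do with uxn): a weak reference lets you watch the VM reclaim an object once nothing refers to it anymore.

```python
import gc
import weakref

class Buffer:
    """Stands in for any heap object a GC'd VM manages for you."""
    pass

buf = Buffer()
ref = weakref.ref(buf)  # observe the object without keeping it alive

assert ref() is buf     # still reachable, so still alive
del buf                 # drop the last reference; the object "dies"
gc.collect()            # the VM effectively calls free() for you
assert ref() is None    # ...and it is gone
```

(CPython actually frees it at the `del` via reference counting; the `gc.collect()` just makes the point explicit.)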


@klardotsh @neauoire I think uxn not only doesn't have a GC, it has no heap allocation of any kind. There's no equivalent of Forth's ALLOCATE. Which means that if you limit the stack and load a small program, most of the 64KB can't ever be used.

(Which is totally fine! Great, even!)


@akkartik @klardotsh we tend to do that sort of stuff in program-space instead of it being part of the assembler, I think.

@neauoire @klardotsh Yeah, all heap allocation, whether malloc or ALLOCATE, has to happen at runtime. It doesn't matter what the source language is.

@neauoire @akkartik @klardotsh if we REALLY want to be pedantic, UXN is garbage collected in that the OS gives it a block of memory (which you keep) on VM boot and when you kill the VM the operating system reclaims the memory. Just part of why it's so hard to pin down “low level” on modern machines/operating systems.

@neauoire @akkartik @klardotsh it has been quite delightful learning that you learned about garbage collection today :) a constant reminder of how unpredictably varied folks' exposure is to... everything.

as far as i can tell the go-to uxn approach is to statically allocate however much memory you think you'll need?

you could write a heap allocator or GC for uxn, and it might be useful if you wanted to do something with dynamic needs; a variable-size level plus a variable number of enemies in a game, for example
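A sketch of what such an allocator could look like, in Python for illustration (all names are hypothetical): a first-fit free list over a fixed 64KB block, mirroring uxn's static memory.

```python
# First-fit free-list allocator over a fixed block, sketched in Python.
# free_list holds (offset, size) pairs describing unused regions.
SIZE = 0x10000  # 64KB, like uxn's whole address space
free_list = [(0, SIZE)]

def alloc(n):
    """Return the offset of a free region of n bytes, or None."""
    for i, (off, size) in enumerate(free_list):
        if size >= n:
            if size == n:
                free_list.pop(i)                    # exact fit: remove the hole
            else:
                free_list[i] = (off + n, size - n)  # shrink the hole
            return off
    return None  # out of memory

def free(off, n):
    """Give a region back (no coalescing in this sketch)."""
    free_list.insert(0, (off, n))

a = alloc(0x100)
b = alloc(0x200)
assert (a, b) == (0x000, 0x100)
free(a, 0x100)
c = alloc(0x80)  # reuses part of the freed region
assert c == 0x000
```

The same structure translates to uxntal: a few words of zero-page state and a loop over the list.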

@neauoire @akkartik @klardotsh for the most part in 64k there's just not _that_ much room for heavily dynamic requirements, so you just don't do it. might farm it out to the disk instead and use files as dynamically sized records.

slightly funny: if you went down the lisp route rather than the forth one you'd almost certainly have had to write a GC right at the start, as it's pretty fundamental. but you wouldn't have written an assembler, probably. different challenges in different spaces.

@neauoire oh and, this is definitely the kind of thing that would get you cancelled on HN but celebrated here. Shows your focus on creating rather than, for lack of a better term, computer science

@maxc @neauoire eh HN is more about ‘can i make money from this?’ and little care about computer science proper.

@peregrine @neauoire yeah but you DEFINITELY can't make money with your hobby vm if it doesn't have garbage collection!!!

@maxc @peregrine @neauoire on HN you're also going to find the crowd that are convinced that having a GC will make the language useless, so clearly Rust should be used.

There is a certain predictability to the comments there.

@maxc @akkartik @klardotsh I just woke up with a realization about this,

So, recently I was implementing cons cells in Uxn, and I stopped because I couldn't figure out how to handle the data that I pop out from between two cells: it would create all that wasted memory space. So I flagged those addresses into another list to recycle them, but in the end I stopped because I figured that maybe my implementation was just not a good way to do this.

But maybe that was a form of garbage collection
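That recycle list is essentially a free list, which is the classic manual-allocator counterpart to GC: the programmer, not the VM, decides when a cell is dead. A Python sketch of the pattern (hypothetical names, not from the actual uxn implementation):

```python
# Cons cells in a fixed pool, recycled through a free list.
# -1 stands in for nil; cells are slots in flat car/cdr arrays.
POOL = 64
car = [0] * POOL
cdr = [0] * POOL
free_head = 0
for i in range(POOL - 1):  # chain every cell into the initial free list
    cdr[i] = i + 1
cdr[POOL - 1] = -1         # end of the free list

def cons(a, d):
    """Take a cell off the free list; like malloc for a single cell."""
    global free_head
    assert free_head != -1, "pool exhausted"
    cell = free_head
    free_head = cdr[cell]
    car[cell], cdr[cell] = a, d
    return cell

def release(cell):
    """Put a cell back; the *caller* decides it is dead (manual, not GC)."""
    global free_head
    cdr[cell] = free_head
    free_head = cell

x = cons(1, -1)  # the list (1)
y = cons(2, x)   # the list (2 1)
release(y)       # y's slot goes back on the free list
z = cons(3, x)   # the list (3 1), reusing y's slot
assert z == y
```

A GC would differ in exactly one place: it would find the dead cells itself instead of relying on `release`.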

@neauoire Yeah, this is what @maxc was alluding to with, "if you went down the lisp route rather than the forth one you'd almost certainly have had to write a GC right at the start." Lists and trees by their primitive operations make ownership and lifetime questions more thorny.


@neauoire @maxc @klardotsh Hmm, it's not just lists and trees. The moment a language has syntax for any literal data that can be returned from a function, you have to think about how long to keep it around. Like Python arrays or Lua tables. Watch out for this if you ever find yourself wanting to construct quoted blocks in uxn. That's a big change, and it would change the flavor of the language.

@akkartik @maxc @klardotsh I have a very low threshold for pain, and having to go back and clean things up every couple of frames seems like a .. painful computing model. I doubt I'll be venturing much further into that space myself

@neauoire @akkartik @maxc @klardotsh I like how retroforth deals with strings. If you keep them, they get space from the heap; otherwise they go to a string pool and eventually get overwritten.

@neauoire @akkartik @maxc @klardotsh
> Lifetime

At the interpreter, strings get allocated in a rotating buffer.
This is used by the words operating on strings, so if you need
to keep them around, use `s:keep` or `s:copy` to move them to
more permanent storage.

In a definition, the string is compiled inline and so is in
permanent memory.

You can manually manage the string lifetime by using `s:keep`
to place it into permanent memory or `s:temp` to copy it to
the rotating buffer.

@neauoire @akkartik @klardotsh different pain for different people :)

should be noted that for an integrated gc you don't have to actually do anything about it as the user.

when you write `local t = {}` in lua you make a new table. it gets cleaned up by the lua vm automatically when nothing could possibly refer to it any more. burden is on the vm to do that, but it's on the programmer to not make so much garbage data that the vm has to waste a lot of time cleaning up your mess!

@neauoire @maxc @akkartik @klardotsh that's not unusual, that just sounds like a basic allocator.

@aeva @neauoire @maxc @akkartik @klardotsh The term "the heap" for all dynamically allocated memory in fact originates from the practice of storing exactly such a free list in a heap data structure.

(I assume at some point you want to add some minor additional smarts to coalesce adjacent free blocks, but I'm not familiar with specific implementations.)
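Coalescing can be as simple as a sort-and-merge pass over the free list; a minimal Python illustration (not from any particular allocator), representing free blocks as (offset, size) pairs:

```python
def coalesce(free_list):
    """Merge adjacent (offset, size) free blocks into larger ones."""
    merged = []
    for off, size in sorted(free_list):
        if merged and merged[-1][0] + merged[-1][1] == off:
            prev_off, prev_size = merged[-1]
            merged[-1] = (prev_off, prev_size + size)  # block touches the previous one: extend it
        else:
            merged.append((off, size))
    return merged

# three adjacent 0x100-byte holes become one 0x300-byte hole
assert coalesce([(0x200, 0x100), (0x000, 0x100), (0x100, 0x100)]) == [(0x000, 0x300)]
```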

@mcc @neauoire @maxc @akkartik @klardotsh a basic heap allocator is probably a fine place to start either way for something like an uxn application as it's easy to reason about and easy to replace if perf calls for it. does uxn have perf instrumentation?

@aeva @mcc @maxc @akkartik @klardotsh uxntal doesn't have built-in heap allocation, but it can be built in uxntal itself if someone needs it in their programs.


I haven't had to make use of that yet, personally. Whenever I need modifiable strings, I store them in a buffer with a fixed size in the zero-page.

What is perf instrumentation?

@neauoire @mcc @maxc @akkartik @klardotsh performance instrumentation. this would either be part of the uxn emulator, or something that attaches to the emulator, or something the application would add hooks for, and it measures timing information and other things (eg power consumption) to help you identify bottlenecks by severity, and thus what actually needs to be optimized in a given application.

@neauoire @aeva @mcc @akkartik @klardotsh at its most basic, profiling can just be a list of numbers indicating time spent in different regions of the program; usually either a running total or last X counts (frames or whatever) to get min/max/mean/median from.

Ideally you can nest regions.

Could be instruction counts or wall clock time or battery consumption or, whatever.

From that you can see at a coarse level "updating the rabbits is slow" and start reasoning about why.
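That list-of-numbers approach might look like this in Python (a toy sketch; the region names and nesting are hypothetical):

```python
import time
from collections import defaultdict

totals = defaultdict(float)  # region name -> accumulated seconds

class region:
    """Time a named region; nests naturally via `with` blocks."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        self.start = time.perf_counter()
    def __exit__(self, *exc):
        totals[self.name] += time.perf_counter() - self.start

for frame in range(3):
    with region("update"):
        with region("update/rabbits"):  # nested region
            sum(range(10_000))          # stand-in for real work

# the outer region includes everything the inner one does
assert totals["update"] >= totals["update/rabbits"] > 0
```

In a uxn rom the equivalent would be reading a counter at region entry and exit and accumulating the difference into a table, dumped once per second.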

@maxc @neauoire @mcc @akkartik @klardotsh it's a worthwhile skill, as improving the computational efficiency of programs also improves their energy efficiency

@maxc @neauoire @aeva @akkartik @klardotsh One nice thing is you don't have to write most of the profiler yourself; there are existing profiling tools, and there are ways you can do things like call a function at periodic points inside your interpreter and have an external profiler measure the time between those function calls.

The difficulty with automatic garbage collection in uxn is that it is not clear when something can be collected, because there is no lexical scope. For my cons lists, right now I don't delete them at all, but if I did it would be via a "delete" or "free" call; and in a similar way I would coalesce with an explicit command as well, as it is a very expensive operation.

Once I have cracked the problem of cons lists in my lambdas, maybe I could use the lambdas as lexical scope and have proper garbage collection. Sounds interesting.

