Our HyperJam way back when sort of broke my brain, or at least my brain's relationship to computing. I started wondering why it's so hard to harness the computing power that we theoretically have at our fingertips, even decades after an elegant tool like HyperCard was born. I've been really enjoying http://worrydream.com/MediaForThinkingTheUnthinkable/ and it's got me excited about computing again.
being arrested in Portland
What happens if the federal mercenaries arrest you in Portland.
Note the absence of Miranda rights, no phone call, not being informed of the charge, no charging documents, and not getting your belongings or your ID back.
The mercenaries keeping your driver's license, wallet, and phone after they kidnap you is a deliberate attack on your life.
If you protest, carry a copy of your ID, not your real ID.
I dunno who deciphers things in 2020 for a living anymore, but they probably have a workflow I'd like to borrow.
Hopefully it doesn't lead me back to the Creative Suite, though! 😅 Forgive my transgression, Merveilles! At least I got data out of a "proprietary" format. 😉
The only reason the transcript was possible is because I could envision a workflow where I could take some concrete (and, when possible, automated) steps towards deriving and collating information without data loss.
Now I think we need a workflow for *decipherment*, which I've never done before. A way to annotate the text with hypotheses, like "I think these two letters are a suffix" or "maybe these four letters have this meaning".
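To make that concrete, here's a minimal Node sketch of what an annotation layer over the transcript could look like. The data shape and names are just my guesses, not an existing tool:

```javascript
// Hypothetical shape for decipherment annotations: each hypothesis
// marks a span of the transcript and records a guess about it.
const hypotheses = [
  { start: 0, end: 2, kind: "suffix", note: "I think these two letters are a suffix" },
  { start: 4, end: 8, kind: "meaning", note: "maybe these four letters mean something" },
];

// Pull the glyph run each hypothesis refers to out of the transcript,
// so guesses can be reviewed side by side with the text they annotate.
function annotate(transcript, hypotheses) {
  return hypotheses.map((h) => ({
    glyphs: transcript.slice(h.start, h.end),
    ...h,
  }));
}

const transcript = "abqdefgh";
console.log(annotate(transcript, hypotheses));
```

The nice thing about keeping hypotheses as data alongside the transcript is that they can be merged, diffed, and argued over in that Discord without anyone overwriting the source text.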
My point is, despite leaning on the executables distributed by techno-feudalists, I've got what I believe to be a complete and accurate transcript of Traditions of the Scattered Path, which *is* shareable with other people, even though it's a work derived from the copyrighted art. I'm hesitant to share it publicly in the short term, but a group of enthusiasts reached out and we're turning this text file over and over again in a Discord.
Another NodeJS script took the sorted glyph directory structure and reassembled the text. There were plenty of flaws, so as a verification step I made a close-enough font and re-typeset the whole booklet.
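The reassembly step would look something like this sketch, assuming the sorting step left one directory per glyph class with each file named for where that glyph occurred. The naming scheme here is my guess, not necessarily the real script's convention:

```javascript
// Hypothetical layout after hand-sorting:
//   glyphs/<glyphId>/<page>-<line>-<column>.png
// Reassembly inverts the sort: turn each filename back into a
// positioned occurrence, order them, and group them into lines.

// Parse a (glyphId, filename) pair back into a positioned occurrence.
function parseOccurrence(glyphId, filename) {
  const [page, line, column] = filename.replace(".png", "").split("-").map(Number);
  return { page, line, column, glyphId };
}

// Order occurrences by page, then line, then column, and group them
// back into lines of glyph ids -- the reassembled transcript.
function reassemble(occurrences) {
  const sorted = [...occurrences].sort(
    (a, b) => a.page - b.page || a.line - b.line || a.column - b.column
  );
  const lines = new Map();
  for (const o of sorted) {
    const key = `${o.page}:${o.line}`;
    if (!lines.has(key)) lines.set(key, []);
    lines.get(key).push(o.glyphId);
  }
  return lines;
}

// Example: two glyph classes, three occurrences on page 1, line 1.
const occurrences = [
  parseOccurrence("A", "1-1-2.png"),
  parseOccurrence("B", "1-1-1.png"),
  parseOccurrence("A", "1-1-3.png"),
];
console.log(reassemble(occurrences).get("1:1")); // [ 'B', 'A', 'A' ]
```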
Adobe was like, "Save your work to the cloud?"
And I was like, "HISSSSSSSSS!"
Though ironically I did end up saving my work to Google Drive. And anyway, my dependency on Creative Suite is still unfortunate; I really want to distance myself from these tools in the long run.
By aligning the scanned pages and slicing them all into line images, I was able to build a directory of PNG lines of text whose names were indexable. A little NodeJS script broke those into ~17.5k glyph images, which I sorted by hand. I used some tricks to sort most of them quickly, but wound up manually sorting about 5,000 of them.
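The glyph-splitting step presumably comes down to finding the gaps between glyphs. Here's a sketch of that logic reduced to a 1-D "ink profile" per line image; the actual script may well have worked differently:

```javascript
// Sketch of glyph segmentation on a single line image, reduced to a
// 1-D ink profile: inkProfile[x] is the count of dark pixels in pixel
// column x. Glyphs are runs of inked columns separated by empty gaps.
// This is my guess at the approach, not the real script.
function findGlyphSpans(inkProfile, threshold = 0) {
  const spans = [];
  let start = null;
  for (let x = 0; x < inkProfile.length; x++) {
    const inked = inkProfile[x] > threshold;
    if (inked && start === null) start = x;  // a glyph begins
    if (!inked && start !== null) {          // a glyph ends at the gap
      spans.push([start, x]);
      start = null;
    }
  }
  if (start !== null) spans.push([start, inkProfile.length]);
  return spans;
}

// Two glyphs: columns 1-3 and 5-6 carry ink, column 4 is a gap.
console.log(findGlyphSpans([0, 3, 5, 2, 0, 4, 4, 0])); // [ [1, 4], [5, 7] ]
```

Each span then becomes one cropped glyph PNG, named after its line image plus its position, which is what makes the directory indexable later.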
This is like a new, baby version of Codex Seraphinianus— there are meaningful illustrations, and hundreds of lines of expertly typeset text that no one's managed to decipher yet.
For the longest time, the chief obstacle was that there was no transcript. People were trying to transcribe it by hand, substituting the glyphs with letters of known alphabets, but this was tedious and error-prone.
You can maybe guess where this is going.
That said, for the physical edition for Nintendo Switch, Jason Roberts— the game's creator— made a companion booklet called "Traditions of the Scattered Path", in a new invented language that *does* carry textual information!
Well, the game's got no dialogue, but it does depict text— it's just text in a constructed alphabet that doesn't convey any textual information. It's there for show.
At the end of the day, this is an almost pointless exercise. I basically remade Flash, in a way, which in this case is bad news. A game this simple should be written simply but also run simply.
So it's back to my original strategy, using this view to render an SVG UI "in situ", and modifying the WebGL game to load in SVG assets for everything.
I've sure learned a lot though!
While I'm still researching ways to solve or mitigate it, I think that no matter what I do, every update to the camera position changes a CSS variable on the capsule div, which forces a style recalculation of absolutely everything. Per frame.
Maybe that's fine, but I suspect this is related to why the demo runs poorly in Chrome (though it is buttery smooth in Safari).
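For illustration, the per-frame pattern looks roughly like this (the property and variable names are made up, not the real Verreciel code). Building the value string is cheap; the expensive part is the write:

```javascript
// Compose the CSS transform value from the camera state. Cheap.
// Names here are illustrative, not the actual Verreciel source.
function cameraTransform(cam) {
  return `rotateX(${cam.pitch}deg) rotateY(${cam.yaw}deg) translate3d(${-cam.x}px, ${-cam.y}px, ${-cam.z}px)`;
}

// In the render loop the write would be something like:
//   capsuleEl.style.setProperty("--camera-transform", cameraTransform(cam));
// Writing a custom property on the capsule's root invalidates computed
// style for every descendant that references it -- once per frame.
console.log(cameraTransform({ pitch: 10, yaw: 45, x: 0, y: 0, z: 100 }));
```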
One thing's for certain: none of this is a problem in WebGL Verreciel.
Here's the whole scene with the perspective CSS property commented out.
Again, everything is 2.5D, which works well most of the time, because most of this stuff faces the "camera". Wires that intersect the middle of the capsule are an exception, as their SVG is perpendicular to the "camera", and so a hodgepodge of ortho line segments is used as a stopgap.
Looks good, right? But again, it's incompatible with VR and has what I think is an unavoidable performance bottleneck: CSS recalculation.
Remember when I posted that CodePen two or three weeks ago, and was like, "look what I can make with SVG+CSS, but it's not like it's the *real* #Verreciel haha just a little browser trick nothing serious!"
😅 Well... after promising I wouldn't invest too much time in getting it to replicate basic Verreciel functionality, I went and did the math necessary to drive wire connections.
It also supports web fonts, Unicode, and overengineering.
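I'd guess the math in question is something like projecting each port's 3D position into 2D so an SVG line can connect two ports. A hedged sketch, with every name illustrative rather than taken from the real code:

```javascript
// Simple perspective projection of a point onto the screen plane.
// `perspective` would mirror the CSS perspective value; none of
// these names come from the actual Verreciel source.
function project(point, perspective = 800) {
  const scale = perspective / (perspective + point.z);
  return { x: point.x * scale, y: point.y * scale };
}

// An SVG wire is then just a path between two projected endpoints.
function wirePath(a, b) {
  const p = project(a);
  const q = project(b);
  return `M ${p.x} ${p.y} L ${q.x} ${q.y}`;
}

console.log(wirePath({ x: 100, y: 0, z: 0 }, { x: 100, y: 50, z: 800 })); // M 100 0 L 50 25
```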