various news:
- My positive-future worldbuild is a finalist: worldbuild.ai/W-0000000281/
If I win, Max Tegmark will immediately self-actualize me. If not, well, maybe the real solution to the alignment problem was the friends we made in the comp discord.

- Visited VRChat. The entire human world is going to move here over the next decade. Feeling restless.

- Making a toy (game?) to find out how applied utilitarianism actually behaves, and whether the consent principles from Propinquity Cities fix it


@faun So cool to hear from you, mate!

Could you go deeper on the VR Chat story? Why do you think the world's going to move over there?

Also super curious about the toy/game project!

@raelzero I mentioned to a transhumanist VR community that I'd be coming in for 2 hours at a random time, and a bunch of them turned up for me and showed me around ;_; Me, a mere slimegirl.

afaict the vibes in VR communities are *generally* stronger than in most easily accessible real-world communities, even now. Eventually the technology is going to be more practical than traditional screens for every purpose, everyone is going to have one, and then who's going to bother with the physical world?

So I'm thinking... there are different ways this could go. Different systems that could become the standard, and the standard systems will determine the character of basically the entire human social world. I want to forecast the crucial forks and try to improve the path we're on. I think the funders would recognize the importance of this.

One fork I see is the youths' experience of it. Do it right, and VR can be a vibrant, supportive, immersive learning environment. But unless someone makes a project of it, I expect this society to fail its youth. Right now, we just rotate them between the prison, the mall, and the street, and I'm concerned it's not going to be any different in VR.

I'm concerned that there's going to be a very school-like "learning app" that will be like eating cardboard, and then there will be games and VRChat, which exist in a separate world and kind of only detract from learning/health outcomes. It might be better than what we had, but we can do much better than that.

@faun Well, that's an interesting thought, thanks! I'll probably try to play with VR Chat as well, and see what happens :D

Reading Snow Crash currently. The process of picking an avatar in VR Chat reminded me of a passage early in the book describing how implicit social classes form based on the quality of people's avatar models.

(replying on the other things in separate messages)

@raelzero Went and sprunged an article about that: researchgate.net/publication/3

In reality I expect class differences will be a lot more subtle than that, as the technology proliferates quickly, and data quicker. Creating the systems to protect proprietary avatars would be unpopular with devs (difficult and unpleasant to make), and would make the platform significantly less appealing to users (uglier, stifling).

@faun Seems super interesting, thanks!

Just a bit worried about spoilers for Snow Crash tho, so I'll hold off on the paper a bit :P

I absolutely also think class differences will be subtler: in tech, they already are well beyond the "more money --> more expensive gadgets" phase.

It'd be naive to think the trend would reverse with even more tech, and potentially even more extractive business models.

I do, however, see many potential benefits e.g. in terms of self-expression.

@raelzero The utilitarianism thing might not end up being very fun for others, since it's a "game" that involves ingesting lots of arbitrary (generated) information that you can't even see all at once, and steadily untangling it to optimize a number.
The number represents how happy everyone is, though, so it's a more meaningful number than the number that you optimize in most games.
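(If it helps to picture the skeleton: a minimal sketch below, with made-up names and toy structure, nothing like the real generated content; it uses the rand crate, 0.8-style API. The idea is just a generated population with hidden preferences and one aggregate-welfare number to push up.)

```rust
use rand::Rng; // rand 0.8-style API

// Hypothetical stand-ins: the real generated content is much richer than this.
struct Person {
    // How much this person values each of three goods. Hidden from the
    // player at first; the game is about uncovering it bit by bit.
    weights: [f64; 3],
}

struct Society {
    people: Vec<Person>,
}

impl Society {
    fn generate(n: usize) -> Self {
        let mut rng = rand::thread_rng();
        let people = (0..n)
            .map(|_| Person {
                weights: [rng.gen(), rng.gen(), rng.gen()],
            })
            .collect();
        Society { people }
    }

    // The one number the player optimizes: total welfare under a
    // proposed allocation of the three goods.
    fn total_welfare(&self, allocation: [f64; 3]) -> f64 {
        self.people
            .iter()
            .map(|p| {
                p.weights
                    .iter()
                    .zip(allocation.iter())
                    .map(|(w, a)| w * a)
                    .sum::<f64>()
            })
            .sum()
    }
}

fn main() {
    let society = Society::generate(1000);
    // A naive equal split; the point of the game is doing better than
    // this as the hidden structure comes into view.
    println!("welfare: {:.1}", society.total_welfare([1.0, 1.0, 1.0]));
}
```

The "untangling" part is that the weights stay hidden and only come into view as you dig, so you can never see the whole thing at once.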

@raelzero Most of the reason I'm doing this is probably just that I enjoy using Rust though x] I'm learning the bevy engine. It's immature, but it's cool. I managed to kinda replicate the simple/succinct animation system I made in blibium engine.
It blocks bevy's parallelism when it's running, but hopefully it'll never use much CPU.
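(Rough sketch of the shape, not the actual code: it assumes bevy's 0.11+ `add_systems(Update, …)` scheduling style and a hypothetical `Anim` tween component, and it assumes the animation step is written as an exclusive system, which is the usual way a system ends up blocking bevy's parallelism.)

```rust
use bevy::prelude::*;

// Hypothetical component: a simple tween on an entity's translation.
// (Stand-in names; not the blibium or actual project API.)
#[derive(Component)]
struct Anim {
    from: Vec3,
    to: Vec3,
    elapsed: f32,
    duration: f32, // assumed > 0
}

// Exclusive system: taking `&mut World` gives unrestricted ECS access,
// but the scheduler cannot run any other system in parallel with it.
fn run_anims(world: &mut World) {
    let dt = world.resource::<Time>().delta_seconds();
    let mut query = world.query::<(&mut Transform, &mut Anim)>();
    for (mut transform, mut anim) in query.iter_mut(world) {
        anim.elapsed = (anim.elapsed + dt).min(anim.duration);
        let t = anim.elapsed / anim.duration;
        transform.translation = anim.from.lerp(anim.to, t);
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, run_anims)
        .run();
}
```

That's the tradeoff being described: full access to the world in one succinct body, at the cost of nothing else running alongside it.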

@faun Well, looks like a nice excuse to learn a new language :P

The idea seems super conceptual — not that I'd expect anything less from you :D.

Looking forward to the results!

@faun I finished reading the worldbuild. Bravo! Didn't expect transhumanism and VR to play such a heavy role in this timeline. Enjoyed the colorful interplay of those forces.

There are bits I have questions about. E.g. in what scenario would an ontological shift make the AI stop doing work to improve the world? If it's ok, I'd love to probe you for some details about this future.

@changbai Sure. Although note that there's an appendix: forum.effectivealtruism.org/po and I'll be doing a podcast about it with the FLI soon.

But, for value binding... hmm..

I guess I'd paraphrase the issue as, how do you get a machine to strive to improve an external reality when sensors cannot directly perceive external reality? Sensors only perceive images, references, never the thing itself. At some point, creating fake images (with a screen) becomes easier than influencing reality.

@changbai Eventually the agent figures out that there's an external reality, which it resides in, but this doesn't necessarily cause it to update its understanding of its goal. If it was trained in a world of images, it might still only care about the parts of reality that let it generate and sustain images. Building screens for itself and guards to keep us from turning the screens off.

@changbai I wish I could just say "do it the way humans do it", but I don't understand how humans do it, and humans *still* frequently fail at it. I don't think evolution actually solved this problem ;-;

@changbai Anyway, so, the ontological shift problem is like the second level of this. Even if you get it to the point where, for any regular camera you give it, its values recognize the entities behind the images as part of the same world,
how do you make sure that if you give it a sensor that sees a different part of the world, in a radically different way, its valuation stuff will still care about the humans in there?
