Hi, author here. I heard this got posted and decided to make an account so I could hop in. Thanks for sharing!
I'm also looking into simplifying it a bit more with environment maps, which I shared on my Bsky: https://bsky.app/profile/dannyspencer.bsky.social/post/3mecu...
This was so much fun to read! Very neat solutions using spherical coordinates and logarithms.
How did you get the actual idea to do this in the first place?
Thanks!
It's fuzzy, but I think it was because I was learning GB assembly while working on shaders in Houdini or something (I'm a tech artist). The two worlds collided in my head: I saw that there's no native multiplication on the GB and figured it'd be a fun problem.
As someone with a similar background (graphics programmer) it sure seems like a fun problem.
I honestly found the lack of multiplication instruction quite surprising. I did not know that!
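For anyone curious how you multiply without a MUL instruction: since a*b = 2^(log2 a + log2 b), two table lookups and an addition can stand in for the missing multiply. Here's a toy C sketch of that general trick; the table sizes and scale factors are my own illustration, not the author's actual implementation (which is fixed-point GB assembly):

    #include <math.h>
    #include <stdint.h>

    #define LOG_SCALE 128.0  /* fixed-point scale for the log domain */

    static uint16_t log_tab[256];   /* round(128 * log2(i)) for i >= 1 */
    static uint16_t exp_tab[2048];  /* round(2^(x / 128)), saturated */

    void build_tables(void) {
        for (int i = 1; i < 256; i++)
            log_tab[i] = (uint16_t)(log2((double)i) * LOG_SCALE + 0.5);
        for (int x = 0; x < 2048; x++) {
            double v = pow(2.0, x / LOG_SCALE) + 0.5;
            exp_tab[x] = (uint16_t)(v > 65535.0 ? 65535.0 : v);
        }
    }

    /* Approximate a*b with two lookups and one addition -- the addition
       replaces the multiply the CPU doesn't have. */
    uint16_t mul_approx(uint8_t a, uint8_t b) {
        if (a == 0 || b == 0) return 0;           /* log(0) is undefined */
        return exp_tab[log_tab[a] + log_tab[b]];  /* small rounding error */
    }

On real hardware the tables would live in ROM; the tradeoff is a little precision, which (as the post argues) the eye barely notices.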
It’s nice getting real hacker material on hackernews
It wasn't just a prompt to an AI? How did they do it? ;)
The lost, dark art of using one's brain to implement something line by line.
I mean it’s cool and all but it’s like making a painting entirely out of tiny dots with your hands tied behind your back. I’m happy for their achievement and it looks cool but it shouldn’t throw any shade on those of us who just like to use a paint brush instead.
Genuinely curious, what’s your goal here? Disparage those who use LLMs? Or just express your unhappiness at the amount of AI content on the HN front page? Or do you just want to throw shade on LLM use in general?
This is impressive and cool but I don’t understand the bitterness here.
Sarcasm, in response to HN being mostly about AI now.
I’m very interested in the AI content, but it’s also a bit sad how much it became the main topic.
I've only been here 8 years, but it seems like there has always been such a topic sucking the air from the room in any given era.
This inevitably results in even completely unrelated topics constantly becoming a reference to that conversation.
That has its own wake of someone discussing how it's brought into every conversation by those who either love or hate it - further making it suck even more air out of the room.
At this point the ink catches up with itself, while folks like Danny Spencer occasionally deliver the quick doomscrolling hit we were all really here for.
Just admit that you don't understand sarcasm.
Aww man you were doing so well
? I'm not any of the previous people talking about why you might have commented. I'm talking about your note above on bringing sarcastic comments about AI into a post that previously wasn't about AI. That said, sure - I'm probably not the best sarcasm detector myself anyhow :).
I.e. AI is so much the main topic here that we still get some type of comment (sarcastic or not) bringing it up in the few posts unrelated to it. It's truly, sadly inescapable on more than one level, as will be whatever the next hot topic is in a few years.
Sorry
Hey no worries, sorry I was unclear! Have a good one.
Ah, so number 2. Thanks for answering!
Awesome-looking results. As far as I understand, it's a "3D" shader in the sense that it looks 3D, but it's a prerendered 2D normal map which is then lit using the resulting world-space normals.
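A hedged sketch of what that lighting step amounts to, in plain C with floats for readability (the actual hardware has no floating point, so the post's version is fixed-point and table-driven; all names here are illustrative):

    /* Light a prerendered world-space normal map with one directional
       light, then quantize to a 4-shade Game Boy style ramp. */
    typedef struct { float x, y, z; } Vec3;

    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* normals: one unit vector per pixel (the prerendered map)
       light:   normalized light direction
       out:     shade index 0..3 per pixel */
    void shade_pixels(const Vec3 *normals, Vec3 light, unsigned char *out, int n) {
        for (int i = 0; i < n; i++) {
            float lambert = dot3(normals[i], light);         /* in [-1, 1] */
            if (lambert < 0.0f) lambert = 0.0f;              /* backfacing -> darkest */
            out[i] = (unsigned char)(lambert * 3.0f + 0.5f); /* round to 0..3 */
        }
    }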
Here are the frames: https://github.com/nukep/gbshader/tree/main/sequences/gbspin...
It's not that different from "real 3D" renderers. Especially in deferred rendering pipelines, the rasterizer creates a bunch of buffers for depth, normals, color, etc., but the main shaders run on those 2D buffers. That's the beauty of it: the parts operating on 3D triangles stay simple, and the expensive lighting shaders run once on flat 2D images with zero overdraw (see the sketch below). The shaders don't care whether the normal map buffer came from 3D geometry rasterized just now, prerendered some time ago, or a mix of the two. And even in forward rendering pipelines, the fragment shader operates on implicit 2D pixels created from "real 3D" data by the vertex shaders and rasterizer.
The way I look at it, if the input and the math in the shader work with 3D vectors, it's a 3D shader. Whether there is also a 3D rasterizer is a separate question.
Modern 3D games exploit this in many different ways. Prerendering a 3D model from multiple views might sound like cheating, but imposters are a real technique used by proper 3D engines.
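To make the "shaders don't care where the buffers came from" point concrete, here's a minimal sketch (illustrative struct and names, not any particular engine's API). The lighting pass only ever sees flat 2D arrays:

    typedef struct { float x, y, z; } Vec3;

    /* The G-buffer. Whether these arrays were rasterized this frame,
       prerendered long ago, or a mix of the two is invisible below. */
    typedef struct {
        const Vec3 *normal;   /* per-pixel world-space normals */
        const Vec3 *albedo;   /* per-pixel base color */
        int width, height;
    } GBuffer;

    /* One directional light, evaluated once per screen pixel: zero overdraw. */
    void light_pass(const GBuffer *g, Vec3 light_dir, Vec3 *out) {
        for (int i = 0; i < g->width * g->height; i++) {
            float l = g->normal[i].x * light_dir.x
                    + g->normal[i].y * light_dir.y
                    + g->normal[i].z * light_dir.z;
            if (l < 0.0f) l = 0.0f;                /* clamp backfacing */
            out[i].x = g->albedo[i].x * l;
            out[i].y = g->albedo[i].y * l;
            out[i].z = g->albedo[i].z * l;
        }
    }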
There's a GBDK demo that actually does something similar (spinning 2D imposters). It doesn't handle the lighting though, which is the impressive part here.
https://github.com/gbdk-2020/gbdk-2020/tree/develop/gbdk-lib...
Unfortunately, the 2D imposter mode has pretty significant difficulties with arbitrarily rotated 3D. The GBDK imposter rotation demo needs a 256k cart just to handle 64 rotation frames in a circle for a single object. Expanding that out to fully 3D views and rotations gets quite prohibitive.
Haven't tried downloading RGBDS to compile this yet. However, I suspect the final file is similar in size, pushing the upper limits of GB cart sizes.
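Rough back-of-envelope (my numbers, not from either demo): 256 KB / 64 frames is about 4 KB per frame. Covering a sphere of viewpoints at the same angular step would need on the order of 64 × 32 ≈ 2048 views, i.e. roughly 8 MB, which is already at the ceiling of the largest common GB mapper (MBC5 tops out at 8 MB of ROM), before you add any lighting data.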
Well, Cannon Fodder for the GBC is 1 MB, and others such as Metal Gear and Alone in the Dark are pretty sizable for the hardware.
It’s a shader, not a renderer. The images are pre-rendered, but the shading is done in real time.
⇒ I think they’re correct in calling this a 3D shader.
It's not that different from how some creative Mac games were doing 3d lighting on 2d textures prior to 3d accelerated hardware being available. The neat part here is that it runs on a Gameboy Colour.
On a device that apparently doesn't support floating-point operations or even native multiplication. Super cool.
> An overall failed attempt at using AI
> I attempted to use AI to try out the process, mostly because 1) the industry won't shut up about AI, and 2) I wanted a grounded opinion of it for novel projects, so I have a concrete and personal reference point when talking about it in the wild. At the end of the day, this is still a hobbyist project, so AI really isn't the point! But still...
> I believe in disclosing all attempts or actual uses of generative AI output, because I think it's unethical to deceive people about the process of your work. Not doing so undermines trust, and amounts to disinformation or plagiarism. Disclosure also invites people who have disagreements to engage with the work, which they should be able to. I'm open to feedback, btw.
Thank you for your honesty! Also tremendous project.
The funny thing is the phrasing used to be more neutral, but I changed the tone to be slightly more skeptical because people thought I was just glazing AI in my post. Another guy on Reddit seemed annoyed that I didn't love AI enough.
I just wanted to document the process for this type of project. shrug
It seems to me that AI is mostly optimized for tricking suits into thinking they don't need people to do actual work. If I hear "you're absolutely right!" one more time my eyes might roll all the way back into my head.
Still, even though they suck at specific artifacts or copy, I've had success asking an LLM to poke for holes in my documentation. Things that need concrete examples, knowledge assumptions I didn't realize I was making, that sort of thing.
Sweet Gameboy shader!
You're absolutely right! (sorry, I couldn't resist)
Just… ignore Reddit.
I dunno about the need for disclosure in this way. In my working life I’ve copied a lot of code from Stack Overflow, or a forum or something, when I’ve been stuck. I’ve understood it (or at least tried to) when implementing it, but I didn’t technically write it. It was never a problem though, because everybody did this to some degree, and no one would demand others disclose such a thing, at least in hobby projects or low-stakes professional work (obviously it’s different if you’re making, like, autopilot software for a passenger plane or something mission-critical).
If it’s the norm to use LLMs, which I honestly believe is the case now or at least very soon, why disclose the obvious? I’d do it the other way around: if you made it by hand, disclose that it was entirely handmade, without any AI or Stack Overflow or anything, and we can treat it with respect and ooh and ahh accordingly. But otherwise it’s totally reasonable to assume LLM usage. At the end of the day the developer is still responsible for the final result and how it functions, just like a company is responsible for its products even if it contracted out their development, or how a filmmaker is responsible for how a scene looks even if they used Adobe After Effects to content-aware-remove an object.
I disclosed AI because I think it's important to disclose it. I also take pride in the process. Mind you, I also cite Stack Overflow answers in my code if I use it. Usually with a comment like:

// Source: https://stackoverflow.com/q/11828270

With any AI code I use, I adopted this style (at least for now):

// Note: This was generated by Claude 4.5 Sonnet (AI).
// Prompt: Do something real cool.

This GBC shader reveals a key truth: all computation is approximation under constraint. Multiplication becomes table lookups plus addition, while precision yields to what the eye actually sees.
I bow before the master. Genuinely outstanding work.
Since you're already doing what's essentially demoscene-grade hacking, have you thought about putting together a short demo and entering it at a demoparty? There's a list of events at demoparty.net - this kind of thing would absolutely shine there.
I’m incredibly impressed by this, largely because it actually is running on a CGB. What I often see are hacks where the Game Boy is just being used as a terminal and the cartridge has been packed with far more powerful processing hardware.
I lowkey wish Nintendo would rerelease the GBC or GBA; I would buy one. They could bake some games into a few cartridges and make it 100% worth the buy too.
You can pick a used one up for pretty cheap. Add a flash cartridge and you're done. I think the cheap android handhelds of the same form factor are a better option though.
I've still got my Gameboy collection, but rarely use it. It's just so much easier to fire up an emulator these days.
I still have my 90s one but would love a modern brand new one, similar to how they did the SNES Mini
You can buy the ModRetro Chromatic from the Oculus VR creator. It's better than anything Nintendo could ever produce.
I've seen those but I don't like the aesthetic; my GBC from the 90s is dirty but sturdy as heck despite my carelessness through 28-plus years.
Doesn’t he use your money to blow people up or something?
No, the US government does that. He just takes the money taxpayers gave the government to blow people up with and, unlike the other defense contractors, also indirectly finances the production of Game Boys with some of it. The idea that giving money to ModRetro finances arms is essentially backwards from how the money actually flows.
It would be quite hilarious if the game boy knockoff proceeds were funding the defense contract executions.
This is pricey but pretty awesome. Very well built, high quality. Hit up a local used game store and have a more modern hardware experience with legit copies of the actual games.
https://www.analogue.co/pocket
I went with getting a GBA SP and replacing the screen with a more modern panel. The kids love it.
Really interesting project, thanks for sharing. It reminded me a bit of my time coding assembly on the C64 (yeah, I'm old). For 3D (wire-frame) rendering we also needed to find creative ways around hardware limitations, especially the lack of a multiply instruction.
This is why HN exists, almost gives me the same joy as flipping through tech magazines of yester-decades.
This is the coolest thing I've seen in months. Licence it as beerware, then I'm obliged to owe you one.
The "Making it work" section seems to abruptly end at the following?
Aah, you're right. That was from my vomit draft and I forgot to tidy it up. I'll update the post soon.
Thanks!
Always loved using old hardware with recent understandings.
I don't think there's anything recent here; they're just pre-computing a normal map, which effectively "bakes" in a 3D-looking image.
Was there anything of that sort made during the GBC era on this hardware? I thought nobody ever attempted it before.
Not exactly this, but many "3D games" were pre-computed scenes. The normal map is the novel bit of this demo.
Ok, and at the time, was anybody even thinking about computing normal maps this way on such hardware? That was my original thought: "maybe" this is the result of applying more recent ideas to hardware that wasn't made to support them. But maybe I'm wrong and people did try.
Nice, I’ll have to give this a try on my Analogue Pocket
This author is a psycho. In a good way.
I can’t believe it
Isn't it a bug that when spinning the object the light also spins?
It's the equivalent of spinning the view camera around in the scene. Up / Down spins the light coordinates, Left / Right spins the camera viewpoint.
Probably could have been written that way though, since it is spinning the camera view rather than the object.
Nice job!