Yes! Just started reading the table of contents, and already I'm feeling that joy of old-school creative computing. Revival of the culture of personal computers and programming as a technology of liberation. A better future is possible and the power is in our hands.
yes!
I like this magazine vibe, it reminds me of the good ol' l33t zines from the late '80s and '90s. However, if I can offer a suggestion, I'd also pair the technical articles with a little more punky, down-to-earth stuff. They were cheerful, informal, and full of that cheeky, irreverent, cocky smart-ass humor, plus this mysterious edge that made them absolutely magnetic to me. Life just wasn’t so heavy back then.
Thanks for the suggestion! I wouldn't mind having such articles in PO! tbh - let me think what can we do about it (or rather: let me pass this to the rest of the team so they think about it too).
I like this format and I also like irreverent humor like the BOFH chronicles... can I submit content for consideration in the next issue ?
I think we should try. We'll have to figure out how to mark it so that it's clear it's a work of fiction, though on the other hand it might be obvious. Anyway, submit it :)
Also, how about 1 page tech-related entertaining short stories?
like Mondo 2000 :)
I still have my Mondo 2000 zine. It was literally a futurist guidebook for cyberpunk of today. Better living through chemistry, memes, cybernetics were all predicted by Mondo.
Wow cool. I had not heard of Mondo 2000 despite reading HN for almost 20 years. And I did not realize Boing Boing was so old. Makes me wonder what else existed.
My family had a bunch of "Dr. Dobb’s Journal of Computer Calisthenics & Orthodontia"[0] and similar things (BYTE, COMPUTE!). (Which seem slightly drier, but maybe more like Paged Out.)
[0]:https://archive.org/details/dr_dobbs_journal_vol_01/mode/2up
TIL :D
Sadly I don't know if that kind of 80s/90s irreverence would go well with today's sensitivities.
that's the point! we got so concerned with creating a safe space for everyone that can't possibly offend that we lost sight of the community-building intent. The crux is to have people self-select without offending them, but IMO it's not a binary goal.
> Query based compilers are all the rage: Rust, Swift, Kotlin, Haskell, and Clang all structure their compilers as queries.
I've never heard of this. It's a pity the article doesn't go into details.
It is a double edged sword of the single page layout that you really have to make one point briefly and get out of there. I had to pare down many details to fit the layout.
If you want to learn more about query based compilers as a concept, I highly recommend ollef's article: https://ollef.github.io/blog/posts/query-based-compilers.htm...
If you want to learn how to implement a query based compiler, I have a tutorial on that here: https://thunderseethe.dev/posts/lsp-base/ (which I also highly recommend but that might be more obvious since I wrote it)
Old discussion: https://news.ycombinator.com/item?id=23644391
Finding this one-pager was great! It gave me a new term I didn't have before that leads to all sorts of new materials to go rifling through.
They've got a new web viewer in this issue that can be used to link to individual articles and might be nicer than reading a PDF on some screens: https://pagedout.institute/webview.php?issue=8&page=1
The article I submitted has an HTML tag in the title, and seems to have broken the web viewer :(
Note that you can link to pages in a PDF with a hash like #page=64 (for example) in the URL.
https://pagedout.institute/download/PagedOut_008.pdf#page=64
Whoops. Looking into it.
EDIT: Fixed. It wasn't the tags - it was a trailing space we had in the "database". I honestly thought I'd handled that case, but apparently not.
Thanks! I also told Aga via email in the thread where I submitted my article.
Worth noting that the HTML tag in the title was stripped from the PDF table of contents as well, so the title for that article in the contents is missing a word. No big deal, but good to know for future submissions!
This goes to the "fix me" list. We're planning a rebuild in the next few days anyway, so it should get fixed then.
This is really nice but I see this works only for the current issue i.e. #8 and not for previous issues.
Still would like a straight html version for reading on a phone. One with resizable text and proper reflow.
This! Pdf is nice, but not on a slow device/connection.
A couple of the stories where I feel I have expertise I found to be a bit objectionable. The title/headline was some clever or unexpected thing, but upon reading it turns out there is nothing supporting the headline.
E.g. "Integer Comparison is not Deterministic", in the C standard you can't do math on pointers from different allocations. The result in the article is obvious if you know that.
Also, in the Logistic Map in 8-Bit. There is a statement
> While implementing Algorithm 1 in modern systems is trivial, doing so in earlier computers and languages was not so straightforward.
Microsoft BASIC did floating point. Every 8-bit of the era was able to do this calculation easily. I did it on my Franklin ACE 1000 in 1988 in basic while reading the book Chaos.
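For reference, the iteration in question is just x_{n+1} = r·x_n·(1−x_n), so here's the modern "trivial" version the article alludes to (a quick sketch of the standard map, not the article's code):

```python
def logistic_map(r, x0, n):
    """Iterate x_{k+1} = r * x_k * (1 - x_k) and return the whole trajectory."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 3.2 settles into a period-2 cycle; push r toward 3.9 for chaos,
# which is what made this map so fun to plot on any machine, 8-bit or not.
trajectory = logistic_map(3.2, 0.5, 200)
print(trajectory[-2:])
```

The whole thing is a multiply, a subtract, and another multiply per step, which is why Microsoft BASIC's floating point handled it fine.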
I suppose what I'm saying is the premise of the articles seem to be click-baity and I find that off putting.
You're right.
In general when selecting articles we assume that the reader is an expert in some field(s), but not necessarily in the field covered by a given article. As such, things which are simple for an expert in the specific domain can still be surprising to learn for folks who aren't experts in that domain.
What I'm saying is that we don't try to be a cutting-edge scientific journal — instead, we publish even the smallest trick that we decide someone may not know about and may find fun/interesting to learn.
The consequence of that is that, yeah, some articles have somewhat clickbaity titles for some of the readers.
On the flip side, as we know from meme-t-shirts, there are only 2 things hard in computer science, and naming is first on the list ;)
P.S. Sounds like you should write some cool article btw :)
For what it's worth, I am only a mid-tier nerd and after reading this issue, I feel like I am your target audience. Nothing deep or overly-detailed, just lots of jumping-off points for me to learn more. Thanks!
I noticed that as well. Also misleading titles like “Eliminating Serialization Cost using B-trees”, where the cost savings are actually for deserialization (from a custom format), and the self-balancing nature of B-trees isn’t actually relevant, as no insertion/deletion of nodes occurs in the (de)serialization scenario, so a single tree level is sufficient. It’s a stretch to refer to it as a B-tree.
I don't think that's fully accurate (full-disclosure: I've done the technical review for this article).
First, as for "serialization" vs "deserialization", it can be argued that the word "serialization" can be used in two ways. One is on the "low level" to denote the specific action of taking the data and serializing it. The other one is "high level", where it's just a bag where you throw in anything related (serialization, deserialization, protocols, etc) - same as it's done on Wikipedia: https://en.wikipedia.org/wiki/Serialization (note how the article is not called "Serialization and deserialization" for exactly these reasons). So yes, you can argue that the author could have written "deserialization", but you can also argue that the author used the "high level" interpretation of the word and therefore used it correctly.
As for insertion not happening and balancing stuff - my memory might be failing me, but I do remember it actually happening during serialization. I think there even was a "delete" option when constructing the "serialized buffer", but it had interesting limitations.
Anyway, not sure how deep you went into how it works (beyond what's in the article), but it's a pretty cool and clever piece of work (and yes, it does have its limitations, but I can also see it having its applications - e.g. when sending data from a more powerful machine to a tiny embedded one).
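To make the general idea concrete (this is a generic offset-table sketch, not the article's actual format): if the writer lays out fixed-size offsets up front, the reader can grab one field in O(1) instead of parsing the whole buffer - which is where the "cost elimination" on the receiving side comes from:

```python
import struct

def serialize(fields):
    """Write a header of fixed-size (offset, length) pairs, then raw payloads.
    The writer does all the layout work so the reader can seek directly."""
    header_size = 4 + 8 * len(fields)      # u32 count + u32 offset/length per field
    out = [struct.pack("<I", len(fields))]
    offset = header_size
    for f in fields:
        out.append(struct.pack("<II", offset, len(f)))
        offset += len(f)
    out.extend(fields)
    return b"".join(out)

def read_field(buf, index):
    """Fetch one field in O(1) without touching the rest of the buffer."""
    off, length = struct.unpack_from("<II", buf, 4 + 8 * index)
    return buf[off:off + length]

buf = serialize([b"alpha", b"beta", b"gamma"])
print(read_field(buf, 1))  # prints b'beta'
```

This is the same design instinct behind formats like FlatBuffers: pay at write time on the big machine, read for nearly free on the tiny one.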
I enjoyed the reward-hacking one: https://pagedout.institute/download/PagedOut_008.pdf#page=59
I have the printed versions of issue #6 and #7, I highly recommend them!
https://www.lulu.com/spotlight/pagedout
I love Paged Out -- it's basically the only modern equivalent to 1980s BYTE or Dr. Dobbs Journal today.
There's also Proof Of Concept Or GTFO edited by Pastor Manuel LaPhroaig https://github.com/angea/pocorgtfo
Boy, PoC||GTFO is my favorite "magazine".
No, not giving spoilers except there might be some polyglot files.
I can highly recommend buying these printed in the "bible style" binding with finger cutouts, ribbon bookmark and everything.
https://nostarch.com/gtfo3
I'll seriously consider this, thanks. The issue is not the price, but physical space.
Awesome! Was looking forward to the next issue. Paged Out reminds me a lot of the old-school 2600 Hacker Quarterly periodical back in the 80s.
https://en.wikipedia.org/wiki/2600:_The_Hacker_Quarterly
2600 is still being published!
https://www.2600.com/Magazine/DigitalEditions
Oh my goodness, they're still doing the radio shows as well.
I was an avid follower of 2600, phrack, etc from the mid 90's up through the mid 2010s and it seemed to me that the 2600 community always sort of stuck to itself, never really growing or shrinking.
Has the quality declined over the years?
I get the 2600 zine at a local book store and I like it but there's a lot of articles that I don't really care about.
It might be a good thing though.
2600 is locked into a format that was relevant 30-40 years ago and is nearly irrelevant today. In my opinion, 2600 is pantomiming a hacker aesthetic and has long since abandoned any commitment to an underlying hacker ethos.
I'm surprised that they're now offering a digital format as, at one point, they were taking a hard stance to not provide one. I guess they changed their mind within the last 10 years or so.
Notice how Paged Out is libre/free licensed, making sure that their articles are provided under CC0, CC-BY, or CC-BY-SA. 2600 is locked under copyright.
[re: page 40 NTP-over-HTTP] ooh i've heard of this! it's being used in real life by Whonix (sdwdate) and Tails (tails-htp/htpdate)
https://www.kicksecure.com/wiki/Sdwdate https://tails.net/contribute/design/Time_syncing/
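The core trick those tools rely on is surprisingly small: it's just the `Date:` header of an HTTP(S) response. A stdlib-only sketch (glossing over the multi-server consensus and sanity checks the real implementations do):

```python
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

def http_date(url):
    """Return the remote server's clock, read from its HTTP Date header."""
    with urlopen(url) as resp:             # real tools pin TLS / route via Tor
        return parsedate_to_datetime(resp.headers["Date"])

# The header format (an RFC 5322 date) parses with the stdlib alone:
sample = "Sun, 06 Nov 1994 08:49:37 GMT"
print(parsedate_to_datetime(sample))  # 1994-11-06 08:49:37+00:00

# sdwdate/htpdate query several unrelated servers and combine the answers,
# since a single Date header is only second-granular and fully spoofable.
```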
From PO's submission FAQs and policies:
> Obviously the used fonts should be readable (and ideally their name shouldn't start with "Comic" and end with "Sans", though there might be some article topics that justify even that!), and while almost any font meets this requirement, please be careful when selecting a non-standard font.
I kinda want to see such an article, but taken seriously discussing the history of the font, its design and purpose, evolution, and purpose-related/derivative font families.
Thank you. I love the wallpapers of Paged Out and always set it as my default wallpaper on MacOS.
I feel like this tweet suggests that the PDF is a polyglot or has an embedded second PDF.
https://x.com/gynvael/status/2024180784064598134
Initial impressions say no, that file isn't a polyglot.
If you like polyglot files, see https://www.alchemistowl.org/pocorgtfo/
PoC||GTFO is the GOAT
Oh yeah. I have the paperback 'bible'. I don't think that that one is a polyglot, though.
Can’t you use the tome as a cluebat?
I believe it’s a dual use tool, hence a polyglot.
Ah, no, sorry, no polyglots there yet. We'll get there one day, but so far our tooling doesn't allow for it.
Ah! I thought your wording was a hint (it's the viewer that thinks it's only 92 pages).
Some nice art in there too.
I love it! I appreciate your AI policy, although I wish it required whether each article has been AI enhanced
I thought about it (quite a lot actually), but eventually came to the conclusion that this would end up being misleading and widely misinterpreted.
For example, one could argue that running a modern grammar checker over an article and based on that doing comma fixes should already be marked with "AI was used to create this article". But reading a statement like that makes folks think "AI slop", which would not be the case at all and would be insanely unfair towards the author. Even creating a scale of "no AI was used at all" → "a bit was used" → "..." wouldn't solve the misinterpretation issue, because regardless of how well we would define the scale, I have zero hope that more than a handful of people would ever read our definitions (and understand them the way we intended) posted somewhere on our website (or even in the zine itself).
Another example would be someone doing research for their article and using AI as a search engine (to get leads on what more to read on the topic). On one hand this is AI usage; on the other, it's pretty similar to just using a classical search engine. Yet someone could still argue that the article should be marked as "AI enhanced".
There are also more popular use-cases for AIs, like just doing wordsmithing/polishing the language. A great majority of authors (including me) are not native English speakers, yet folks do want their articles to present well (some readers are pretty unforgiving when it comes to typos and grammar errors). LLMs are (if used correctly) good tools to help with the language layer. So, should an article where the author has written everything themselves and then used AI to polish it be grouped in the same bag with fully AI generated slop? From my PoV the answer is a pretty clear "no".
Anyway, at the end of the day I decided that any kind of marking on the articles won't work in the intended way, and outright banning any and all AI usage won't work either (it would be hard to detect / it makes no sense in some cases / there are reasons to allow some AI usage). But - as you know, since you refer to our AI policy - I still decided we'd draw a line in roughly the same place where some universities draw it.
Thank you for the thoughtful reply. I do love the pub - both the approach and the content!
It has a little bit of a "2600 vibe" but with a more modern look and feel. This is the first issue I've read, and I like it.
I enjoyed writing an article for this issue.
I highly recommend it if you enjoy writing. It was painless and fun.
A nice break from writing blogs.
It's a great day every time one of these hits the RSS reader. Great work as always Paged Out team!
this is absolutely magnificent, and exactly the kind of thing i wish there were more of in the world.
I took a peek at "Compiler Education Deserves a Revolution" and thought, wtf is this talking about?
It claims clang is NOT "a pipeline that runs each pass of the compiler over your entire code before shuffling its output along to the next pass."
What I think the author is talking about is primarily AST parsing and clangd, whereas "any compiler tome" is still highly relevant to the actual work of building a compiler.
Yeah I was just wrong here. I was under the impression clang had a concept of a request the same way Swiftc does and that is just not true. That's my bad!
https://learn.microsoft.com/en-us/shows/seth-juarez/anders-h...
https://news.ycombinator.com/item?id=11685317
https://lobste.rs/s/dwf2yn/sixten_s_query_based_compiler
https://ericlippert.com/2012/06/08/red-green-trees/
Rust's salsa, etc.
Related search terms are incremental compilation and red-green trees. It's primarily an ide driven workflow (well, the original use case was driven by ides), but the principles behind it are very interesting.
You can grok the difference by thinking through, for example, what invoking `g++` on the command line does - include all headers, compile object files, re-do all template deduction, etc. - versus a model where editing a single line in a single file barely changes the underlying data structure and doesn't force an entire recompilation. (This doesn't need full ownership of editing via hooking UI events or keylogging, either: a directory watcher can treat the file diff as a patch and send it to the server in patch form; the observation being that compiling an O(n)-size file is often far more expensive than a program that goes through the entire file a few times and generates a patch.)
ASTs are similar to these kinds of trees only insofar as the underlying data structures used to understand programming languages are syntax trees.
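If it helps make the idea concrete, the heart of a query-based compiler is memoized queries with dependency tracking: every derived fact is a query, queries call other queries, and editing an input invalidates only what read it. A toy sketch (all names are made up for illustration, not from any real compiler):

```python
class QueryEngine:
    """Toy query-based 'compiler': every derived fact is a memoized query.
    Editing one input invalidates only the queries that read it;
    everything else stays cached across edits."""

    def __init__(self, sources):
        self.sources = sources    # file name -> text
        self.cache = {}           # query key -> cached result
        self.deps = {}            # query key -> set of files it read

    def _memo(self, key, compute):
        if key not in self.cache:
            self.deps[key] = set()
            self.cache[key] = compute(key)
        return self.cache[key]

    def parse(self, name):
        def compute(key):
            self.deps[key].add(name)
            return self.sources[name].split()   # "parsing" = tokenizing here
        return self._memo(("parse", name), compute)

    def word_count(self, name):
        def compute(key):
            self.deps[key].add(name)
            return len(self.parse(name))
        return self._memo(("word_count", name), compute)

    def edit(self, name, text):
        self.sources[name] = text
        # Invalidate only the queries that depended on this file.
        for key in [k for k, d in self.deps.items() if name in d]:
            del self.cache[key]
            del self.deps[key]

engine = QueryEngine({"a.src": "let x = 1", "b.src": "print x"})
print(engine.word_count("a.src"))  # 4
engine.edit("b.src", "print x twice")
print(engine.word_count("a.src"))  # still 4, served from cache
```

Real systems (salsa, red-green trees) add fine-grained keys, result fingerprinting, and cycle detection, but the shape is the same.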
I've always wanted to get into this stuff but it's hard!
OK, but that is distinctly NOT what clang does... incremental compilation with clang is handled at the build system level. I can't speak for rustc, but I do know that it typically ends up going through llvm, which, contrary to the author's claims, is exactly a pipeline.
Just learned about the project, looks really interesting.
really grateful for everyone that contributes to this zine!
some of these articles I wish I could read more of (e.g. IDA Database) :)
Creative Computing, BYTE, MICRO, Nibble, Dr. Dobb’s Journal, Compute!, InfoWorld
So great to find that spirit again!
Love the aesthetic, love the idea. Am too stupid to read it.
The vibe is flawless.
This is so awesome, do you have a mailing list, RSS, etc?
They have both, see the bottom of the home page: https://pagedout.institute/
The very first sentence is: "Hi, here’s the bot-in-chief, Aga, with a little foreword."
Am I to understand that Aga is an AI bot? I see nothing mentioned about this in the FAQs or the webpage. Makes me wonder if this zine may be written by AI agents reproducing the old hacker magazine aesthetic.
Or is "bot-in-chief" some kind of tongue-in-cheek formulation that I can find nothing about online? Aga is listed as "Editor-in-Chief" on the About page.
Haha don't worry, Aga is a human, and so is the rest of the crew :). It's just an internal joke from simpler times.
I see. Thanks for your answer.