Platform Guide

How to save a Reddit post as a PDF

Reddit is one of the hardest sites to print. The new Reddit UI uses shadow DOM components (shreddit-post) that break standard Ctrl+P. Pretty PDF's Chrome extension preprocesses Reddit's live DOM before capture, serializing shadow DOM content so nothing is lost.

Free — 3 PDFs per month. No credit card required.

Why Reddit is hard to print

Reddit's 2023 redesign moved the entire post rendering system to Web Components with shadow DOM encapsulation. The core element, shreddit-post, wraps each post's title, body, voting controls, and metadata inside a shadow root. Shadow roots are intentionally walled off from the rest of the page — that is their purpose in the Web Components spec. But it also means that the browser's built-in print function cannot see inside them.

When you press Ctrl+P on a Reddit post, the browser's print engine traverses the document tree and formats what it finds. It finds the outer <shreddit-post> custom element, but the actual content — the post title, the body text, the images — lives inside the shadow root, which the print engine skips entirely. The result is a mostly blank page, or a page that shows only the elements that exist outside the shadow boundary, like the subreddit header and sidebar.

Dynamic loading compounds the problem. Reddit renders comments lazily as you scroll down the page. If you haven't scrolled to the bottom of a long comment thread, those comments do not exist in the DOM yet. Printing the page captures only the comments that were loaded at the moment you hit print — which could be as few as the first ten or twenty top-level replies in a thread with hundreds.
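Because of this, any capture tool has to keep requesting comments until the thread is exhausted before taking its snapshot. A minimal sketch of that drain loop, with a fake batch loader standing in for whatever actually triggers Reddit's lazy loading (scrolling, clicking "more replies") — illustrative only, not Pretty PDF's actual code:

```javascript
// Drain a lazily loaded comment thread by repeatedly asking a loader
// for the next batch until it comes back empty. `loadBatch` is a
// stand-in for the mechanism that triggers Reddit's lazy loading.
function collectAllComments(loadBatch) {
  const comments = [];
  let batch;
  while ((batch = loadBatch()).length > 0) {
    comments.push(...batch);
  }
  return comments;
}

// Demo: a fake loader that serves `total` comments in fixed-size batches,
// mimicking a thread that loads a page of replies at a time.
function makeFakeLoader(total, batchSize) {
  let served = 0;
  return () => {
    const batch = [];
    while (served < total && batch.length < batchSize) {
      batch.push(`comment-${served++}`);
    }
    return batch;
  };
}
```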

Even for the comments that are loaded, the print output is cluttered with interactive elements that make no sense in a static document. Voting arrows (both upvote and downvote), award icons, share buttons, save buttons, hide buttons, report buttons, "give award" modals, the "more replies" expansion links, user flair, and sidebar widgets all end up in the PDF. The actual discussion content — the thing you're trying to save — is buried under layers of interface chrome.

Pretty PDF's extension-side preprocessor solves this by running before the page capture. The preprocessor walks the shadow DOM tree of every shreddit-post element on the page, reads the content from inside each shadow root, and serializes it into regular HTML elements that can be captured normally. The serialized HTML is then sent to the server, where the Reddit parser extracts the meaningful content — post body, author, subreddit, comments — and discards the UI elements. The result is a clean PDF that contains the full post and discussion without any of the interface noise.
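The flattening step can be sketched as a recursive walk that substitutes each element's shadow content for its (empty) light DOM. Nodes are modeled here as plain objects ({ tag, text, children, shadow }) so the sketch runs anywhere; in the real extension these would be live DOM elements and `shadow` would come from `element.shadowRoot`:

```javascript
// Illustrative shadow DOM flattening (not Pretty PDF's actual code).
// A node is either a text node ({ text }) or an element
// ({ tag, children, shadow }).
function serializeNode(node) {
  if (node.text !== undefined) return node.text;
  // If the element hosts a shadow root, serialize the shadow content
  // in place of the light DOM children, which are typically empty.
  const contents = node.shadow
    ? node.shadow.map(serializeNode).join("")
    : (node.children || []).map(serializeNode).join("");
  return `<${node.tag}>${contents}</${node.tag}>`;
}

// Mock of a shreddit-post whose content lives entirely in a shadow root.
const post = {
  tag: "shreddit-post",
  shadow: [
    { tag: "h1", children: [{ text: "Post title" }] },
    { tag: "div", children: [{ text: "Post body" }] },
  ],
};
```

After serialization, the title and body exist as ordinary HTML tags that any downstream parser can read.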

Reddit parser

What Pretty PDF extracts from Reddit

The Reddit parser captures the content you care about and strips away everything else.

Post title and body

The full post content is extracted — text, inline images, outbound links, and any formatting applied by the author. Self-posts with long-form text, link posts with commentary, and crossposts are all handled.

Author and subreddit info

The post author's username, the subreddit name, and the post timestamp are captured as metadata. This provides context for the content and makes the PDF useful as a reference document.

Comment threads

Comment threads are captured with their hierarchical nesting intact. Top-level comments, nested replies, and deeply threaded conversations are visually indented in the PDF so the discussion structure is immediately clear.
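The nesting preservation boils down to a simple recursion: each reply renders one indentation level deeper than its parent. A sketch using text indentation (a real renderer would emit HTML with margins instead, but the recursion is the same):

```javascript
// Illustrative thread renderer: one indentation level per reply depth.
// A comment is { author, text, replies } where replies is optional.
function renderThread(comments, depth = 0) {
  const lines = [];
  for (const c of comments) {
    lines.push("  ".repeat(depth) + `${c.author}: ${c.text}`);
    // Recurse into replies at the next depth level.
    lines.push(...renderThread(c.replies || [], depth + 1));
  }
  return lines;
}
```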

Embedded media

Images are embedded directly into the PDF, including gallery posts with multiple photos. Video posts include a thumbnail frame. External link previews with their thumbnail images are preserved.

Flair and post metadata

Post flair labels, content tags, and metadata such as the post score and comment count are included. This contextual information helps identify the post's topic and significance when reviewing the PDF later.

The parser removes everything that serves no purpose in a static document: voting arrows (upvote and downvote buttons), award icons and the "give award" dialog, the sidebar with subreddit rules and community info, promoted posts and advertisements, "more replies" expansion links, and the share, save, hide, and report buttons. The result is a PDF that reads like an article, not a screenshot of a web application.
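Conceptually, the cleanup is a filter over parsed elements: keep content, drop chrome. The element kind names below are hypothetical placeholders; the real parser matches Reddit's actual class names and custom elements:

```javascript
// Illustrative chrome-stripping filter (names are hypothetical, not
// Reddit's real selectors). Anything whose kind is in the deny set
// is interface chrome and gets dropped.
const CHROME = new Set([
  "vote-arrows", "award-button", "share-button", "save-button",
  "hide-button", "report-button", "more-replies-link", "sidebar-widget",
]);

function stripChrome(elements) {
  return elements.filter((el) => !CHROME.has(el.kind));
}
```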

Five steps

How it works on Reddit

From any Reddit post to a clean PDF in under ten seconds.

Step 1: Navigate to any Reddit post

Open the Reddit post you want to save. It can be a text post, image post, link post, or a specific comment thread. Works on both new Reddit and old.reddit.com.

Step 2: Extension preprocesses the DOM

When Pretty PDF detects you are on Reddit, the extension's site preprocessor automatically walks the shadow DOM tree. Every shreddit-post element is serialized into standard HTML that the server can read.

Step 3: Click the icon and choose a template

Click the Pretty PDF icon in your toolbar. The popup shows that Reddit has been detected. Select one of five templates — Clean and Minimal work particularly well for discussion threads.

Step 4: Content is extracted and cleaned

The server receives the serialized HTML and activates the Reddit parser. The parser strips voting controls, sidebar widgets, ads, and navigation — then extracts the post content, metadata, and comments.

Step 5: Download your PDF

Your template is applied with embedded fonts, and the PDF is generated by WeasyPrint. The result is a clean, readable document with the full post and thread structure preserved.

Old Reddit vs new Reddit

Reddit's user interface exists in two very different forms, and Pretty PDF handles both. The extension automatically detects which version you are viewing and adjusts its approach accordingly.

Old Reddit (old.reddit.com) uses traditional server-rendered HTML. Posts, comments, and metadata are all present in the standard DOM as regular HTML elements. There is no shadow DOM, no Web Components, and no dynamic content loading for the initial page view. This makes old Reddit straightforward to capture: the extension sends the HTML as-is to the server, where the Reddit parser extracts the content at full fidelity, with no preprocessing required.

New Reddit uses a modern component-based architecture built on Web Components. The shreddit-post element and related custom elements render their content inside shadow roots. Without preprocessing, the content inside those shadow roots is invisible to any capture mechanism that reads the standard DOM. The extension's site preprocessor is essential here — it walks the shadow DOM tree, reads the content from inside each shadow root, and serializes it into regular HTML. This serialized version is then sent to the server for parsing.

The output quality is identical regardless of which version you start from. Both paths produce the same clean PDF with the post title, body, author info, subreddit metadata, and threaded comments. The difference is only in how the content is captured — old Reddit is a simple DOM read, while new Reddit requires the additional shadow DOM serialization step. The extension handles this detection and preprocessing transparently, so you do not need to think about which version you are on.
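The detection itself can be as simple as a hostname check plus a probe for the component-based UI. A sketch, where `hasShredditPost` is a stand-in for a real DOM query such as `document.querySelector("shreddit-post") !== null` (illustrative, not the extension's actual logic):

```javascript
// Illustrative old-vs-new detection. old.reddit.com is identified by
// hostname; the new UI is identified by the presence of shreddit-post
// custom elements, passed in here as a boolean so the sketch runs
// without a browser.
function detectRedditVariant(hostname, hasShredditPost) {
  if (hostname === "old.reddit.com") return "old";
  if (hasShredditPost) return "new";
  return "unknown";
}
```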

If you prefer old Reddit for daily browsing, Pretty PDF works without any extra steps. If you use new Reddit (the default for most users), the extension's preprocessor runs automatically in the background before capture. Either way, the PDF comes out clean.

Frequently asked questions

Why does printing a Reddit post with Ctrl+P produce a blank or broken page?

Reddit's redesigned interface uses Web Components with shadow DOM encapsulation. The primary post element, shreddit-post, renders its content inside a shadow root that is invisible to the browser's built-in print engine. When you press Ctrl+P, the browser only sees the outer custom element — not the actual post text, images, or metadata inside it. On top of that, Reddit loads content dynamically as you scroll, so comments further down the thread may not exist in the DOM at all. Pretty PDF's extension-side preprocessor solves this by walking the shadow DOM tree and serializing its contents into regular HTML before sending it to the server.
Does Pretty PDF capture comment threads?

Yes. Pretty PDF's Reddit parser extracts comment threads and preserves their hierarchical indentation so you can follow the conversation structure in the PDF. Top-level comments, nested replies, and deeply threaded discussions are all captured with proper visual nesting. The voting arrows, collapse toggles, and "more replies" buttons are stripped out, leaving just the comment text, author names, and thread structure.
Does it work on old.reddit.com as well as new Reddit?

Yes. The extension auto-detects whether you are viewing old Reddit or new Reddit. Old Reddit uses standard server-rendered HTML without shadow DOM, which makes it straightforward to parse — the extension does not need to perform any DOM preprocessing. New Reddit requires the full shadow DOM serialization pipeline. Both versions produce the same clean output: post content, metadata, and optionally comments, without UI chrome or navigation elements.
Can I save a single comment instead of the whole thread?

If you navigate directly to a permalink for a specific comment (the URL that looks like /comments/abc123/title/comment_id/), Pretty PDF will treat that comment and its direct replies as the primary content. This is useful when you want to save a particularly insightful answer or a specific branch of a discussion without capturing the entire thread.
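Distinguishing a thread URL from a comment permalink is a matter of counting path segments after /comments/. A sketch under the assumed path shapes /r/sub/comments/postId/slug/ (whole thread) and /r/sub/comments/postId/slug/commentId/ (single comment) — illustrative, not the actual routing logic:

```javascript
// Illustrative Reddit path parser. Returns the post ID plus the
// comment ID when the URL is a single-comment permalink, or null
// when the path is not a post URL at all.
function parseRedditPath(path) {
  const parts = path.split("/").filter(Boolean);
  const i = parts.indexOf("comments");
  if (i === -1 || !parts[i + 1]) return null;
  return {
    postId: parts[i + 1],
    commentId: parts[i + 3] || null, // present only for permalinks
  };
}
```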
Are image and video posts supported?

Image posts are fully supported — the parser extracts and embeds images directly into the PDF, including gallery posts with multiple photos. For video posts, Pretty PDF captures a thumbnail frame along with the post title and any accompanying text. Since PDFs are a static format, video playback is not possible, but the thumbnail provides visual context alongside the discussion content.

Save your next Reddit post as a clean PDF

Free tier, no credit card. 3 PDFs per month with all templates included.

Install Free Extension