Reddit is one of the hardest sites to print. The new Reddit UI uses shadow DOM components (shreddit-post) that break standard Ctrl+P. Pretty PDF's Chrome extension preprocesses Reddit's live DOM before capture, serializing shadow DOM content so nothing is lost.
Free — 3 PDFs per month. No credit card required.
Reddit's 2023 redesign moved the entire post rendering system to Web Components with shadow DOM encapsulation. The core element, shreddit-post, wraps each post's title, body, voting controls, and metadata inside a shadow root. Shadow roots are intentionally walled off from the rest of the page — that is their purpose in the Web Components spec. But it also means that the browser's built-in print function cannot see inside them.
When you press Ctrl+P on a Reddit post, the browser's print engine traverses the document tree and formats what it finds. It finds the outer <shreddit-post> custom element, but the actual content — the post title, the body text, the images — lives inside the shadow root, which the print engine skips entirely. The result is a mostly blank page, or a page that shows only the elements that exist outside the shadow boundary, like the subreddit header and sidebar.
Dynamic loading compounds the problem. Reddit renders comments lazily as you scroll down the page. If you haven't scrolled to the bottom of a long comment thread, those comments do not exist in the DOM yet. Printing the page captures only the comments that were loaded at the moment you hit print — which could be as few as the first ten or twenty top-level replies in a thread with hundreds.
Even on the comments that are loaded, the print output is cluttered with interactive elements that make no sense in a static document. Voting arrows (both upvote and downvote), award icons, share buttons, save buttons, hide buttons, report buttons, "give award" modals, the "more replies" expansion links, user flair, and sidebar widgets all end up in the PDF. The actual discussion content — the thing you're trying to save — is buried under layers of interface chrome.
Pretty PDF's extension-side preprocessor solves this by running before the page capture. The preprocessor walks the shadow DOM tree of every shreddit-post element on the page, reads the content from inside each shadow root, and serializes it into regular HTML elements that can be captured normally. The serialized HTML is then sent to the server, where the Reddit parser extracts the meaningful content — post body, author, subreddit, comments — and discards the UI elements. The result is a clean PDF that contains the full post and discussion without any of the interface noise.
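Conceptually, the preprocessing step looks something like the sketch below. This is a minimal illustration of the technique, not Pretty PDF's actual code: it assumes the relevant shadow roots are open (element.shadowRoot returns null for closed roots), and the function name is purely illustrative.

```ts
// Minimal sketch of the preprocessing idea, not Pretty PDF's actual code.
// It assumes the relevant shadow roots are open: element.shadowRoot returns
// null for closed roots, in which case there is nothing to read.
function flattenShadowRoots(host: Element): void {
  const shadow = host.shadowRoot;
  if (!shadow) return; // closed or absent shadow root: nothing readable
  // Flatten any components nested inside this shadow root first.
  shadow.querySelectorAll("*").forEach((child) => flattenShadowRoots(child));
  // Hoist the shadow markup into the host's light DOM, where a plain
  // capture of document.documentElement.outerHTML can see it.
  host.innerHTML = shadow.innerHTML;
}

// Run on the live page, over every post component, before capture.
document.querySelectorAll("shreddit-post").forEach((post) => flattenShadowRoots(post));
const serializedHtml = document.documentElement.outerHTML;
```

The key point is that the flattening runs in the live page, before capture, so the HTML the server receives contains the post content as ordinary elements rather than hidden shadow trees.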
The Reddit parser captures the content you care about and strips away everything else.
The full post content is extracted — text, inline images, outbound links, and any formatting applied by the author. Self-posts with long-form text, link posts with commentary, and crossposts are all handled.
The post author's username, the subreddit name, and the post timestamp are captured as metadata. This provides context for the content and makes the PDF useful as a reference document.
Comment threads are captured with their hierarchical nesting intact. Top-level comments, nested replies, and deeply threaded conversations are visually indented in the PDF so the discussion structure is immediately clear.
Images are embedded directly into the PDF, including gallery posts with multiple photos. Video posts include a thumbnail frame. External link previews with their thumbnail images are preserved.
Post flair labels, content tags, and metadata such as the post score and comment count are included. This contextual information helps identify the post's topic and significance when reviewing the PDF later.
The parser removes everything that serves no purpose in a static document: voting arrows (upvote and downvote buttons), award icons and the "give award" dialog, the sidebar with subreddit rules and community info, promoted posts and advertisements, "more replies" expansion links, and the share, save, hide, and report buttons. The result is a PDF that reads like an article, not a screenshot of a web application.
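To make the clean-up concrete, here is a rough sketch of the kind of filtering involved. It is not Pretty PDF's server-side parser: the selector list is hypothetical, and the browser's DOMParser is used only to keep the example self-contained.

```ts
// Illustration of the filtering step, not Pretty PDF's server-side parser.
// Every selector below is a hypothetical stand-in for Reddit's UI chrome.
const UI_CHROME_SELECTORS = [
  "[aria-label*='upvote' i]",   // hypothetical: voting controls
  "[aria-label*='downvote' i]", // hypothetical: voting controls
  "[data-testid*='award']",     // hypothetical: award icons and dialogs
  "aside",                      // sidebar widgets, rules, community info
  "shreddit-ad-post",           // hypothetical: promoted posts
];

function stripInterfaceChrome(serializedHtml: string): string {
  const doc = new DOMParser().parseFromString(serializedHtml, "text/html");
  for (const selector of UI_CHROME_SELECTORS) {
    doc.querySelectorAll(selector).forEach((el) => el.remove());
  }
  return doc.body.innerHTML;
}
```

Stripping is only half the job; as described above, the parser then extracts the post body, metadata, and comment tree from what remains.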
From any Reddit post to a clean PDF in under ten seconds.
Open the Reddit post you want to save. It can be a text post, image post, link post, or a specific comment thread. Works on both new Reddit and old.reddit.com.
When Pretty PDF detects you are on Reddit, the extension's site preprocessor automatically walks the shadow DOM tree. Every shreddit-post element is serialized into standard HTML that the server can read.
Click the Pretty PDF icon in your toolbar. The popup shows that Reddit has been detected. Select one of five templates — Clean and Minimal work particularly well for discussion threads.
The server receives the serialized HTML and activates the Reddit parser. The parser strips voting controls, sidebar widgets, ads, and navigation — then extracts the post content, metadata, and comments.
Your template is applied with embedded fonts, and the PDF is generated by WeasyPrint. The result is a clean, readable document with the full post and thread structure preserved.
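For readers curious about the hand-off between the capture and conversion steps, a sketch is below. The endpoint URL, payload shape, and field names are assumptions for illustration; only the overall flow (serialized HTML in, finished PDF out, rendered server-side by WeasyPrint) comes from the steps above.

```ts
// Sketch of the extension-to-server hand-off described in the steps above.
// The endpoint URL and payload fields are assumptions, not Pretty PDF's API.
async function requestPdf(serializedHtml: string, template: string): Promise<Blob> {
  const response = await fetch("https://api.example.com/v1/convert", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      site: "reddit",         // tells the server to use the Reddit parser
      template,               // e.g. "clean" or "minimal"
      html: serializedHtml,   // output of the shadow DOM preprocessor
    }),
  });
  if (!response.ok) {
    throw new Error(`PDF conversion failed with status ${response.status}`);
  }
  return response.blob();     // the finished document, rendered server-side
}
```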
Reddit's user interface exists in two very different forms, and Pretty PDF handles both. The extension automatically detects which version you are viewing and adjusts its approach accordingly.
Old Reddit (old.reddit.com) uses traditional server-rendered HTML. Posts, comments, and metadata are all present in the standard DOM as regular HTML elements. There is no shadow DOM, no Web Components, and no dynamic content loading for the initial page view. This makes old Reddit straightforward to capture: the extension performs no DOM preprocessing, the HTML is sent to the server as-is, and the Reddit parser extracts the content at full fidelity.
New Reddit uses a modern component-based architecture built on Web Components. The shreddit-post element and related custom elements render their content inside shadow roots. Without preprocessing, the content inside those shadow roots is invisible to any capture mechanism that reads the standard DOM. The extension's site preprocessor is essential here — it walks the shadow DOM tree, reads the content from inside each shadow root, and serializes it into regular HTML. This serialized version is then sent to the server for parsing.
The output quality is identical regardless of which version you start from. Both paths produce the same clean PDF with the post title, body, author info, subreddit metadata, and threaded comments. The difference is only in how the content is captured — old Reddit is a simple DOM read, while new Reddit requires the additional shadow DOM serialization step. The extension handles this detection and preprocessing transparently, so you do not need to think about which version you are on.
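The detection itself can be very simple. The sketch below is an assumption about how such a check might work, not Pretty PDF's actual logic: it keys off the old.reddit.com hostname and the presence of a shreddit-post element.

```ts
// Assumption about how the old/new Reddit check might work; not Pretty PDF's
// actual detection logic.
function needsShadowDomPreprocessing(): boolean {
  // old.reddit.com serves classic server-rendered HTML: capture the DOM as-is.
  if (window.location.hostname === "old.reddit.com") return false;
  // New Reddit: preprocess only when the Web Components post element is present.
  return document.querySelector("shreddit-post") !== null;
}
```

When the check comes back true, the shadow DOM flattening shown earlier runs before capture; otherwise the page's HTML is captured directly.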
If you prefer old Reddit for daily browsing, Pretty PDF works without any extra steps. If you use new Reddit (the default for most users), the extension's preprocessor runs automatically in the background before capture. Either way, the PDF comes out clean.
Reddit's core post component, shreddit-post, renders its content inside a shadow root that is invisible to the browser's built-in print engine. When you press Ctrl+P, the browser only sees the outer custom element — not the actual post text, images, or metadata inside it. On top of that, Reddit loads content dynamically as you scroll, so comments further down the thread may not exist in the DOM at all. Pretty PDF's extension-side preprocessor solves this by walking the shadow DOM tree and serializing its contents into regular HTML before sending it to the server.
If you open a permalink to a specific comment (a URL of the form /comments/abc123/title/comment_id/), Pretty PDF will treat that comment and its direct replies as the primary content. This is useful when you want to save a particularly insightful answer or a specific branch of a discussion without capturing the entire thread.
Free tier, no credit card. 3 PDFs per month with all templates included.
Install Free Extension