Quantum Physics Research

5 sources · 3 types · Ready to chat

Stop tab-hopping.

Start commanding.

Notes, documents, URLs, research reports, and conversations: one Collection, one AI chat, every answer.

Start Researching Free
No credit card required
Five Source Types

Everything your research touches.
In one place.

Notes

Live references, never stale

Notes added to Collections are live: edit the note anywhere and the collection always has the latest version. Backlink-aware suggestions help you pull in connected notes automatically.

Up to 20 notes per collection

Entanglement

[[wiki link]][[wiki link]]
LIVE
Documents

PDFs, DOCX, EPUB: uploaded or referenced

Upload documents directly to a collection or reference existing documents from your library. Text is auto-extracted, chunked, and ready for AI to search through.

10 uploaded docs · 50MB max · PDF, DOCX, EPUB, RTF, TXT, MD, XLSX
PDF
DOCX
EPUB
Extracting text...
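The "auto-extracted, chunked" step above can be sketched as a simple sliding window over the extracted text. This is an illustrative example only; the function name, chunk size, and overlap are assumptions, not Chunk's actual parameters.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 20) -> list[str]:
    """Split extracted text into chunks of `chunk_size` words,
    overlapping by `overlap` words so context isn't lost at boundaries."""
    words = text.split()
    if not words:
        return []
    step = chunk_size - overlap  # how far the window advances each iteration
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # the last window already covered the tail of the document
    return chunks
```

Overlapping chunks like these are what make a document "ready for AI to search through": each chunk is small enough to embed and retrieve independently.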
URLs

Paste a link. We read the whole page.

Add any URL and Chunk's extraction engine automatically pulls the full text content. No more switching to your browser to re-read articles; the content lives inside your collection.

Up to 20 URLs · Auto-extracted · Stored securely
arxiv.org/abs/2026.0...
Extracted ✓
Research Reports

Deep research, delivered to your collection

Run a Deep Research query and Chunk generates a comprehensive multi-source report with real-time progress tracking. The finished report lands directly in your collection, ready for AI chat.

Up to 20 reports · Markdown + PDF export · Multi-depth, multi-breadth
Gathering sources... 0%
PDF
Quantum_Report.pdf
Conversations

Past AI chats become research context

Add any previous conversation to a collection. The full chat history becomes part of your AI's context, so new questions build on old answers.

Full conversation history as context
What's the EPR paradox?
The EPR paradox is a thought experiment...
The Workspace

Your entire research.
One command center.

Toggle any source on or off. Switch AI models. Ask a question across everything.

Quantum Physics Research

12 sources · 147,200 tokens

๐Ÿ“ Notes
Quantum Mechanics Overview
Bell's Theorem Notes
EPR Paradox Summary
๐Ÿ“„ Documents
Feynman_Lectures_Ch9.pdf
Quantum_Computing_Primer.docx
๐Ÿ”— URLs
arxiv.org/abs/2026.04521
nature.com/articles/s41586...
en.wikipedia.org/wiki/Qubit
๐Ÿงช Research
"Quantum Error Correction 2026"
๐Ÿ’ฌ Conversations
"Thesis direction brainstorm"
"Supervisor feedback Q&A"
Claude Opus 4.6 โ–พ
147.2K / 200K tokens
Compare the EPR paradox's original argument with Bell's experimental results. Which sources in this collection are most relevant?

Based on your collection sources, here's a comparison:

EPR's Original Argument (1935)
Your 📝 Bell's Theorem Notes and the 📄 Feynman Lectures outline the core claim: quantum mechanics is incomplete...
Bell's Experimental Challenge
The 🔗 arxiv paper (2026.04521) and your 🧪 Quantum Error Correction report provide the experimental context...
Ask about your collection...

200K
Claude context window · tokens per collection chat

GPT context window · tokens per collection chat

Gemini context window · tokens per collection chat

Chunk tracks token usage per model in real time. Summarize sources with AI to fit more into your context.

URL Extraction

Paste a link. Read the whole page.
No browser needed.

When you add a URL to a collection, Chunk doesn't just store a bookmark. It uses its extraction engine to pull the full text content of the page: articles, papers, blog posts, documentation.

That extracted text becomes part of your collection's context. Toggle it on and ask AI about it alongside your notes, documents, and research reports. No more copying and pasting from browser tabs.

Extraction status is tracked per-URL: pending, extracting, complete, or failed. You always know what's ready.

  • Chunk Extract Engine
  • Up to 20 URLs per collection
  • Full text stored securely
  • Independent context toggles
Knowledge Sources · 4 added
arxiv.org/abs/... ⏳ Extracting...
Added to context
~2,400 tokens
Target Collection
200K limit
15.2K
28.4K
✨ Summarize
✨ -78%
8.1K
12.6K
Total Context Size:
64.3K / 200K tokens
Token tracking, live per model
Token Management

Too much context?
Summarize it.

Every source in your collection uses tokens. A 20-page PDF might consume 15,000 tokens. A detailed research report could take 30,000. As you add more sources, you approach the model's context limit.

Chunk solves this with AI summarization: powered by Claude Haiku, it compresses any source to a fraction of its original token count while preserving the key information. The summary replaces the full text in your context window, letting you fit more sources in.

The token meter updates in real time as you toggle sources on/off and summarize content. You always know exactly how much context budget you have left.
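The arithmetic behind that meter is simple: each toggled-on source contributes its token count toward the model's limit, and a summarized source contributes its summary's count instead of the full text's. A minimal sketch, with an assumed source-dict shape and a 200K limit taken from the Claude examples above:

```python
def context_usage(sources: list[dict], limit: int = 200_000) -> tuple[int, int]:
    """Return (tokens used, tokens remaining) for the toggled-on sources.

    Each source dict looks like:
        {"tokens": int, "summary_tokens": int | None, "on": bool}
    A summarized source counts its summary instead of its full text.
    """
    used = 0
    for s in sources:
        if not s["on"]:
            continue  # toggled-off sources cost nothing
        used += s["summary_tokens"] if s["summary_tokens"] is not None else s["tokens"]
    return used, limit - used
```

Summarizing a 30,000-token report down to a few thousand tokens frees most of that budget for additional sources.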

Collection Chat

Ask one question. Get answers from every source.

Collection Chat isn't a generic AI conversation. It's a context-aware session where every toggled source (notes, documents, URLs, reports, and past conversations) becomes part of the AI's working memory.

Ask AI to compare findings across your papers and notes. Request a summary that synthesizes your URLs with your research report. Ask it to identify contradictions between sources. The AI references specific sources in its response so you can trace every claim.
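One way such a session could be wired up is to concatenate every toggled-on source, labeled by type and title, into the prompt context so the model can reference specific sources by name. This is a sketch under assumed field names, not Chunk's real implementation.

```python
def build_context(sources: list[dict]) -> str:
    """Join toggled-on sources into one labeled context block.

    Each source dict looks like:
        {"type": str, "title": str, "text": str, "on": bool}
    The [type: title] labels let the model cite sources in its answer.
    """
    parts = []
    for s in sources:
        if not s["on"]:
            continue  # toggled-off sources stay out of working memory
        parts.append(f"[{s['type']}: {s['title']}]\n{s['text']}")
    return "\n\n".join(parts)
```

Because each block carries its label, an answer can point back at "📝 Your Notes" or a specific URL, which is what makes every claim traceable.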

Physics Collection
Claude Opus 4.6 ▾
What do my notes say about decoherence vs. what the arxiv paper argues?
Your notes and the arxiv paper take complementary perspectives:
๐Ÿ“ Your Notes ("Decoherence")
You describe decoherence as the primary mechanism for the quantum-to-classical transition, emphasizing the role of environment entanglement destroying superposition states.
๐Ÿ”— arxiv.org/abs/2026.04521
The paper argues that decoherence alone is insufficient, presenting experimental evidence for a spontaneous collapse model that operates independent of environmental interaction.
Key Difference: Your notes focus on the theoretical framework, while the paper challenges it with new experimental data from the 2026 Bell tests.
Ask about your collection
Documents

10

uploaded per collection
Storage

50MB

max upload size
URLs

20

with auto-extraction
Notes

20

as live references
Reports

20

deep research reports
Use Cases

Built for how you actually research.

Literature Review

Compile 15 papers, your advisor's notes, three key URLs, and two deep research reports into one collection. Ask AI to identify themes, gaps, and contradictions across everything.

Collection contents
  • ๐Ÿ“ Advisor meeting notes (4)
  • ๐Ÿ“„ Published papers (6)
  • ๐Ÿ”— arxiv preprints (5)
  • ๐Ÿงช Research reports (2)
  • ๐Ÿ’ฌ Brainstorm session (1)

Competitive Analysis

Save competitor landing pages as URLs, upload their whitepapers, add your own strategy notes. Chat across everything to find positioning opportunities.

Collection contents
  • ๐Ÿ“ Strategy notes (3)
  • ๐Ÿ“„ Competitor whitepapers (4)
  • ๐Ÿ”— Competitor sites (8)
  • ๐Ÿงช Market research report (1)
  • ๐Ÿ’ฌ Team discussion (2)

Deep Hobby Dive

Researching quantum computing for fun? Collect YouTube explanations, Wikipedia deep-dives, your own notes, and a generated research report. Ask AI to explain it like you're 12.

Collection contents
  • ๐Ÿ“ Personal notes (5)
  • ๐Ÿ”— YouTube + Wikipedia (7)
  • ๐Ÿงช "Quantum Computing 101" report (1)
  • ๐Ÿ’ฌ Q&A conversations (3)

Notes

Notes feed Collections as live references. Edit a note and the collection always has the latest.

Explore Notes →
You are here

Collections

The workspace that unifies everything.

AI Chat

Past conversations become research context inside Collections.

Back to Chunk →

Simple, transparent pricing.

Choose the intelligence tier that fits your workflow.

Free

Get started instantly

$0
  • GPT-5-mini
  • 6 searches/day
  • 3 image generations/day
  • Notes + Knowledge Graph
  • Limited Collections
  • AI Memory
  • Document uploads
Start Free
Most Popular

Pro Monthly

Full power, no limits

$9.99/mo
  • All AI models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro)
  • 200 searches/day
  • 20 image generations/day
  • Full Deep Research
  • Unlimited Collections
  • Full Vector Search
  • Priority support
Get Pro
Best Value

Pro Yearly

Save 42%, best value

$69.99/yr
  • All AI models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro)
  • 200 searches/day
  • 20 image generations/day
  • Full Deep Research
  • Unlimited Collections
  • Full Vector Search
  • Priority support
Start Free Trial

Your research deserves a
command center.

Stop scattering. Start synthesizing.

Start Researching Free
Pro unlocks unlimited collections at $9.99/mo