2026 Writing Proposal — Bikram Biswas


Proposal overview

Hi, I’m Bikram Biswas. I found Nym in 2023, and since then I’ve been quietly doing two things: learning the technical side of metadata and mixnets, and listening to the people who live with surveillance every day. This is my personal proposal to write a series of pieces through 2026, but not as a faceless content program. This is me taking the time to turn research, interviews, and lived experience into readable, useful stories that help people understand what’s happening to their data and what they can do about it.


Author / contact

Bikram Biswas — Nym community member (since 2023), privacy researcher and writer.



Objectives

  • Produce readable, well-sourced pieces that translate complex privacy threats into usable understanding for Nym’s audiences.

  • Center lived experience and ethical reporting: anonymized case studies and careful redaction.

  • Demonstrate the value of human judgment in privacy research (context, safety decisions, narrative craft).

  • Provide actionable recommendations and clear threat-model guidance for users and the Nym team.


Scope (what I will deliver)

Demo Content deliverable (first):

  • One long-form article: If You’re Not Valuable, Why Are They Watching You? — 1,600–2,200 words, web-optimized, with FAQ and two simple graphics.

Series deliverables (optional, if funded):

  • Up to 12 articles (each 1,200–2,500 words depending on type), each with bibliography, anonymized interview notes (encrypted), and SEO metadata.

  • For each published article: short editorial note on redactions and research method.

  • Private review copies (secured links) for invited reviewers prior to public release.

A document containing demos of the 12 articles is available (team access only).


Pilot article — deliverable detail

Title (working): If You’re Not Valuable, Why Are They Watching You? — How Metadata Targets Ordinary People
Length: 1,600–2,200 words
Structure: Intro vignette → problem thesis → plain explanation of metadata and aggregation → 2 anonymized case studies → practical individual/structural mitigations → role & limits of mixnets/Nym → policy asks → FAQ → editor’s note on safety.
Assets: 2 simple graphics (how metadata maps a routine; what mixnets protect vs what they don’t).
SEO: slug /why-surveillance-targets-ordinary-people; target keyword why surveillance targets ordinary people; meta description provided.
Safety: interviews encrypted, exact timestamps/GPS/device IDs removed, paraphrase preferred when risk exists.

(Full demo/article outline and SEO block are appended in the proposal.)

I plan to write one clear demo piece first, titled “If You’re Not Valuable, Why Are They Watching You?”. It’s a single long-form article (1,600–2,200 words) that reframes the familiar “nothing to hide” pushback by showing how ordinary metadata and routine logs become powerful surveillance tools when aggregated. I’ll open with a human vignette, a carefully anonymized account of a mother whose welfare payment was blocked after an automated system flagged her household, and use that story to ground the argument: privacy is not about hiding wrongdoing; it’s about protecting everyday life from being scored and acted upon by distant systems.

After the opening, I’ll explain — in plain language and without math — how metadata works: the who/when/where of calls, app use, and movement. I’ll use a simple metaphor (footprints on a city map) so readers see how fragments combine into a reliable picture of routine. I will name the actors plainly: telecoms that log Call Detail Records, platforms that keep friend graphs and engagement traces, ad and analytics companies that turn signals into value, and state agencies that sometimes demand access. I will emphasize incentives: some of this data collection is commercial, some regulatory, and some framed as security — but the harms are real regardless of intention.

I will include two anonymized case studies, carefully redacted and sourced: one showing how aggregated logs led to an unfair administrative decision, and another showing how app logs and scoring damaged a gig worker’s income. These stories will not be sensational; they will be quiet and bureaucratic, because most of the harm is quiet. For every case I publish, I will either rely on public reporting or a voluntary interview where I obtain informed consent and agree how the source should be identified (name, pseudonym, or anonymized). If any detail could re-identify someone, I’ll paraphrase, not quote.

On practical advice, the article will give realistic, small-step guidance: think in threat models, reduce unnecessary sharing, audit app permissions, and use tools appropriate to your threat — and then a candid paragraph on what those tools do and don’t do. I’ll explain how mixnets and NymVPN help reduce metadata correlation and flow analysis, but I’ll also be clear that they don’t fix endpoint compromise or protect a user who voluntarily posts identifying info. I want the reader to leave with non-alarmist steps they can try today and a sense of why systemic change — shorter retention, auditability of automated decisions, and transparency around state access — matters.

To make the article discoverable and useful, I’ll follow a simple SEO and publishing plan. Target keyword: why surveillance targets ordinary people. Title: If You’re Not Valuable, Why Are They Watching You? — How Metadata Targets Ordinary People. Meta description: “Why ‘nothing to hide’ misses the point — how everyday data and logs make ordinary people valuable to surveillance, and what you can do about it.” I’ll include a short FAQ for rich snippets (Does “nothing to hide” mean I don’t need privacy? What is metadata? Can a VPN protect me?), two small graphics (a “how metadata maps a routine” diagram and a “what mixnets protect vs what they don’t” chart), and 1–2 internal links to Nym resources (what is metadata; how mixnets work) plus 2–3 authoritative external citations (Access Now, OONI, investigative reporting).


Research approach & methods

  • Source sweep: peer-reviewed papers, NGO reports, investigative journalism, public filings.

  • Synthesis: human pattern mapping — connecting law, tech, economics, and human cost.

  • Verification: every factual claim cites at least one verifiable source; interview claims labeled “interview — anonymized.”

  • Tooling: LLMs used only for summarizing public documents, compiling bibliographies, and grammar/structure suggestions. No LLM analysis, no narrative construction, no redaction decisions.


Ethics & safety checklist

  • Obtain informed consent and offer anonymity options to interviewees.

  • Prefer paraphrase over verbatim quotes when detail could identify a source.

  • Honest, bold opinions, not propaganda funded by wealthy interests.



Budget (practical & transparent)

Pilot article (per-piece budget example): $150 (midpoint)

Series total (12 articles): $2,000–$5,000 USD (the range depends on the depth and type of each article).

I’m asking for modest funding to make this practical. Suggested per-article support is $100–$200; for a run of a dozen pieces in 2026, that adds up to $2,000–$5,000. That money isn’t for marketing; it covers the things that make ethical, careful research possible, such as transcription, graphics, and legal review. I will also provide a one-page Statement of Work (SOW) and a short budget breakdown showing exactly how funds will be used before any money changes hands.


Statement of Work (SOW) — summary (what I will do if commissioned)

  • Step 1: secure the commission and sign the SOW.

  • Step 2: research sweep — gather verified public reports and identify secondary sources.

  • Step 3: interviews — reach out to and complete 2–4 anonymized interviews, with consent and encrypted recording.

  • Step 4: draft — write 1,600–2,200 words and label interview placeholders for editorial review.

  • Step 5: private review — circulate a secured draft to 2 invited reviewers for a redaction and legal/safety pass.

  • Step 6: final edit, visuals, and SEO metadata; then publish or gate according to the agreed plan. I’ll invoice Net 30 and provide receipts for all third-party expenses (transcription, graphics, legal review).

Deliverables (explicit)

  • Secured Draft (PDF/Docx) for invited reviewers.

  • Final article in Markdown (CMS ready) with images and metadata.

  • Short editorial note on redaction and research methods.

  • Expense receipts for third-party costs (if requested).


Why a human author matters (short)

LLMs can summarize, but they cannot make protective editorial judgments, ethically anonymize sensitive human accounts, or translate regional legal nuance into responsible, actionable advice. I will do the interviews, weigh redaction tradeoffs, and craft narratives so the piece is truthful, safe, and resonant.


Why me? Because this is not just technical translation; it is interpretation and moral framing. An LLM can summarize a court order or list NIST dates. It cannot decide how to anonymize a person’s story so that it remains truthful without putting them at risk, nor can it choose which human detail makes a complex argument feel real. My value is judgment: choosing what to include, how to rephrase a quote so a source is safe, and how to connect a technical risk to a human life in a way that makes sense to ordinary readers.


Thanks for reading