
What could we build?

Published on Apr 30, 2019

Simple possibility: a tool or process that would help news organizations and others address mistrust. Scaling with or without platform blessing.

  1. Track what you’ve seen [chaslot, ccadw]: a bookmarklet.
    + Existing tools to use or parallel: ideal.
    ||: Litterati — enough to tag litter online and shame the government. [pothole-like]
    + A plugin (tracking what you’ve seen) worked well for ProPublica until FB disabled it.
    + Most frustrating: right now you have no idea whether you’re looking at what did harm that day. Data that can claim to span everything is very satisfying.
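The “track what you’ve seen” idea above could start as little more than an append-only log of URLs with timestamps. A minimal sketch, assuming a bookmarklet or plugin simply records each page view (all class and method names here are hypothetical):

```python
import json
import time
from collections import Counter

class SeenLog:
    """Append-only log of URLs a user has seen, with timestamps.

    A bookmarklet or plugin would call record() on each page view;
    summarize() gives per-domain counts, a first cut at answering
    "what was I actually shown today?"
    """

    def __init__(self):
        self.entries = []  # list of (timestamp, url) tuples

    def record(self, url, ts=None):
        self.entries.append((ts if ts is not None else time.time(), url))

    def summarize(self):
        # Count views per domain
        domains = Counter(u.split("/")[2] for _, u in self.entries)
        return dict(domains)

    def export(self):
        # A shareable, anonymizable record that others could build on
        return json.dumps([{"t": t, "url": u} for t, u in self.entries])

log = SeenLog()
log.record("https://example.com/story-1", ts=1)
log.record("https://example.com/story-2", ts=2)
log.record("https://other.example/post", ts=3)
print(log.summarize())  # {'example.com': 2, 'other.example': 1}
```

The export step matters most: the ProPublica experience above suggests any per-user log only becomes valuable once it can be pooled into a dataset that spans what everyone saw.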

  2. A high-quality realtime dataset that others can build on.
    G: lessons from crowd games: good ones have a shared mission, feedback, and advancement. What gives the sense that you’re working toward something?
    A roadmap toward becoming a power user.

    1. Take public content. Let it be flagged and bucketed. Create critical mass: that’s a wedge into understanding what’s cited/used across platforms. Generically valuable.

    2. An “uh-oh!” button. This might be wrong; iterate. [look at WP escalation]

    3. Passively generated data, realtime (within minutes), possibly built into platform algorithms

    4. Enough metadata provided by trusted orgs (on material flagged by anyone?) to decide what should or shouldn’t be amplified. [imagine doing this for one day, to show the possibility?]

    5. In recent weeks we’ve seen reports that FB/Twitter distinguished between high- and low-quality info but didn’t act on it, for fear that it would show political bias.
      Currently the third-party fact-checking scheme works with FB: any member can flag info that may be false; checkers pull things out of that queue to check. [FB hasn’t said publicly how well that’s going.] FB is about to share that data with a few researchers?

    6. We need more than just “what some people flag as wrong”; we need all of it [the posts/ads my grandmother sees].
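The amplification decision in step 4 above could be prototyped as a tiny rule combining anyone’s flags with trusted-org metadata. A toy sketch — the org names, thresholds, and field names are all hypothetical, for illustration only:

```python
# Toy amplification rule: trusted-org verdicts win outright; otherwise
# fall back on raw crowd flags ("uh-oh" presses). Every name and number
# here is made up for illustration.

TRUSTED_ORGS = {"factcheck.example", "healthfeedback.example"}
FLAG_THRESHOLD = 10  # hypothetical cutoff for raw crowd flags

def amplify(item):
    """Return False if trusted-org metadata or heavy flagging says
    the item should not be amplified."""
    for org, verdict in item.get("org_verdicts", {}).items():
        if org in TRUSTED_ORGS and verdict == "false":
            return False
    return item.get("flag_count", 0) < FLAG_THRESHOLD

story = {"url": "https://example.com/s", "flag_count": 3,
         "org_verdicts": {"factcheck.example": "false"}}
print(amplify(story))  # False: a trusted org marked it false
```

Even a one-day demo of a rule this simple, run over a real flag stream, would show whether the “decide what gets amplified” step is feasible.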

  3. Our grassroots to contribute, games to run: PublicEd, Libraries. [tools that might work even without network effects]

    1. Example games: give out an article and ask people to circle the facts; half were not.
      Every movement that got pickup was nagged/driven by kids and stories. Kids love to be actors and specialists.
      Imagine gamifying the capture of URLs: send submitters updates whenever those URLs are used; send URLs in clusters to Congress.

    2. Ex: app security checks: let your data speak for you via/after a self-check

    3. Mark users as trusted news consumers (?)

    4. self-governance?

  4. Data escrow? Collective bargaining and resolving lack of cooperation.
    Asking for specific kinds of data? The platforms’ motives are deeply misaligned with ours; maybe they will compromise between our interests and theirs.

  5. Layering: have the public DB with who said it and its credibility; also where people put their product (in fact-checking). What’s the equivalent of Maps that builds on these data components?

    1. This might not initially have to help everyone. Enough to help some constituents a lot.

    2. Track everything everyone says. [not so huge in our world] Google used to have a ‘quotes’ search in 2007, pulled from speeches and interviews. ClaimReview works for fact-checking. At the statement level: we should check pundits as well as politicians; most checkers don’t do that.

    3. Lay the roads to build the tools. [sunny: yes]

    4. FB ads used as smokescreen (even in own minds) [cwiggins]
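ClaimReview, mentioned in the layering list above, is the schema.org markup fact-checkers already publish at the statement level. A minimal record looks like the following — the names, URLs, and rating are placeholders, but the properties (`claimReviewed`, `itemReviewed`, `author`, `reviewRating`) are the real schema.org ones:

```python
import json

# A minimal ClaimReview record (schema.org vocabulary). All names and
# values below are placeholders for illustration.
claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Statement X was made on date Y.",
    "itemReviewed": {
        "@type": "Claim",
        "author": {"@type": "Person", "name": "Example Pundit"},
    },
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 2,
        "bestRating": 5,
        "alternateName": "Mostly false",
    },
}
print(claim_review["reviewRating"]["alternateName"])  # Mostly false
```

Because the `itemReviewed` Claim carries its own author, the same record shape covers pundits as readily as politicians — the layering DB could aggregate these per speaker.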

  6. Consolidation of ideas. Platforms are frustrated by the number of things they’re getting pitched. Pooling.

    1. Consolidation of existing tools / what’s out there

Ex: a behavior challenge for people. What worked for them? They lacked tools and had to paste in data from many platforms, yet made a change within a week. Invest in using a tool; help those people become active users/propagators in their community.

  • Plugin for generating flags. Tie into ad-aware and VRM tech?

  • Set up meeting for EU in a year


Doable in public: fact-checkers find problematic content and sort it to partners. Must be reportable + anonymized.

Consider the Wikipedia user taxonomy: anons, autoconfirmed, admins. Low autoconfirmed.
Should this be with the platforms or outside? (Outside! to keep trust)
Privacy + tech issues? It travels with you to all platforms…
Now you’re comfortable; status can move with you.
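The anon/autoconfirmed/admin taxonomy above, with status held outside any one platform so it travels with the user, could be sketched like this. The promotion threshold and field names are made up for illustration:

```python
from dataclasses import dataclass

# Wikipedia-style tiers, held outside any one platform so that earned
# status travels with the user. Thresholds are hypothetical.
TIERS = ["anon", "autoconfirmed", "admin"]
PROMOTION_THRESHOLD = 5  # confirmed flags needed to leave "anon"

@dataclass
class PortableUser:
    handle: str
    tier: str = "anon"
    vetted_flags: int = 0  # flags of theirs later confirmed by checkers

    def record_confirmed_flag(self):
        self.vetted_flags += 1
        # Promote after a track record of good flags; admin stays a
        # human appointment rather than an automatic step.
        if self.tier == "anon" and self.vetted_flags >= PROMOTION_THRESHOLD:
            self.tier = "autoconfirmed"

u = PortableUser("example_user")
for _ in range(5):
    u.record_confirmed_flag()
print(u.tier)  # autoconfirmed
```

Keeping the record outside the platforms is what makes “status can move with you” possible — each platform reads the tier, none owns it.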

Plugin action: screenshot sharing?
Recruiting a growing network of info purifiers.


+ Fundamental problems
~ Right now people have to beg for basic data, or for easy ways to gather it. We have lots of tools that came and went, unless they were like ABP (a user tool with black/graylists).

+ Chicken-and-egg problem: similar to FactCheck? Not useful until lots of people use it…
What’s a high-impact use case that would be delightful?

+ Manipulation: need a founding network + meta-moderation.
See W, Y, Google Local Guides. [Local Guides: tiers of use + trust + auth]

+ Pitch to platforms:
Offer a useful stream? (share existing useful streams)
Share data with major regulators?
What principles can people use? How can they act?
What might active communities do with it?
A video on YT flagged by a local guide: auto-submit to the trusted flagger program
(accelerated review and delisting!)
NB: most of their case reviews are for gray areas!
A channel for purists to suggest new categories to act on, or detune.

+ Pitch for the system:
A constellation of flags:
viewers: the “uh-oh” button
a local-guide equivalent vets/clusters
fc.o: adds flags (already happens for platforms)
framework selection // privacy
Then go to each participant and ask what they need.
(Sandro’s ontology of harms)
Ru/bot DB. CredCo DB. GDI. ScienceFeedback/HealthFeedback.
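The flag constellation above — viewer “uh-oh” presses, vetting by a local-guide tier, fact-checker verdicts — could share one record shape. A sketch with hypothetical field names and status labels:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Flag:
    """One flag in the constellation: who raised it, at what tier."""
    url: str
    source: str                    # "viewer", "local_guide", or "factchecker"
    verdict: Optional[str] = None  # only fact-checkers attach a verdict

@dataclass
class FlaggedItem:
    url: str
    flags: list = field(default_factory=list)

    def add(self, flag):
        self.flags.append(flag)

    def status(self):
        # Fact-checker verdicts dominate; local-guide vetting comes next;
        # raw viewer flags alone just mark the item for review.
        sources = {f.source for f in self.flags}
        if "factchecker" in sources:
            return "checked"
        if "local_guide" in sources:
            return "vetted"
        return "reported" if "viewer" in sources else "clean"

item = FlaggedItem("https://example.com/video")
item.add(Flag(item.url, "viewer"))
item.add(Flag(item.url, "local_guide"))
print(item.status())  # vetted
```

A shared shape like this is what would let each participant — viewers, local-guide equivalents, fc.o, the existing DBs listed above — contribute to and read from the same stream; the “go ask each participant what they need” step would then refine the fields.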

+ Pitch for people:
What does this look like that’s better for readers?
Fix what’s amplified.
A Chrome extension improves life for the viewer. [ABP: 100M]
Storytelling around the process: why do this.
>> catalog existing public startups
>> ask them how that’s working; hold an event to build interfaces
>> define the data: how are things added, where does it go, does it exist?


Product design
• CIVIC subproduct
• Community spin-up
• Partnerships (plugins, ad-block, other)

:: properties: copiable, forkable, provenanced, unrestricted, speed.
:: message: [information response; hoax blocker/hate blocker]
:: catalog: past hacks on similar ideas; invite to reprise/rebase/refactor.
[A small team working together for several months? Bold but fragile.]

Idea: tech is a long pull

Per-badge: user trust; self-reported judgement trust.
Motivation: positive flagging.
Be humble; promise to share the data we gather. Get commitments to *

Once we have a community like this, they will love to create feeds (of high-profile quotes) and try to look at each item on them.


Q: is this for fixing platforms or fixing people? If you want to change individual behavior, that’s different.

FullFact sees: Tools for monitoring, flagging, trend-spotting, remediation? +[ask mevan]
How to inoculate people? Reducing susceptibility.

At some point, there was the idea of hosting copyright statements for the different platforms. At another, Google provided info about takedowns.

Crusade → business → racket

Arc of discussion:
