
[LIP-9] Lens-Wide Verification Process #32

Open
wants to merge 1 commit into main
Conversation


@seliqui seliqui commented Feb 9, 2024

LIP-9 for setting up a lens-wide verification process

In light of LIP-5, which could/will open up the garden, a process for verified profiles becomes more relevant.

Currently, each interface/app built on top of the protocol has its own way to verify profiles.
That makes it hard for new people to distinguish verified from non-verified profiles, depending on the platform they were onboarded with.
(Example: a verified profile on Hey doesn't appear as verified on Tape.)
And for projects/brands it's a hassle to get verified separately for each app/platform/interface.

Also, the newly introduced "reputation score" opens up new possibilities on that front.

Here's an approach to start the discussion:

  • The protocol sets up a "database" that holds a list of all verified profiles (a rough sketch follows after this list).
  • Each platform can submit to that database, adding new profiles as verified - in that case, the badge would show "Verified by XYZ" on all platforms that aggregate the list.
  • New platforms can apply to also be able to submit to that database.
  • Each platform can choose to aggregate that list, but of course doesn't have to (though it's recommended).
  • If you have a reputation score above a defined threshold, you can apply for a general/lens-wide verification.
  • A "light verification" can be done individually by going through another process (like linking a GitHub profile, etc.).

Optional:
A "negative" entry. For example if a profile got stolen, impersonated, squatted, scams, etc...
So that this information can be accessed by all platforms across the protocol.
Tho this bears some risks, which probably should require a separate discussion (imho).



@huugo482

Should the lens-wide verification require periodic checks? I'm not sure what criteria are included in the reputation score, but seemingly those would change over time. Considering a verification from "Lens" would carry more weight, it should be pretty bulletproof.

Regarding client-layer verifications, maybe a simpler way would be to allow pinning onchain attestations to your profile. If a client wants to open their verification to EAS, it would conceivably be simple to allow individual accounts to choose the attestations they want across all clients. (Honestly, I don't know how this works exactly, just spouting ideas.)

@seliqui
Author

seliqui commented Feb 10, 2024

TBH, I didn't think of periodic checks yet. Thanks a lot for that input!

I figured the main issue currently is to verify whether a profile qualifies as either a "known entity" or "someone who's trustworthy". The latter means either proving who they are (one way or the other) and then putting their reputation on the line with every post they make - or being someone who stays anonymous but has built up a reputation that way.

In general, I would think:
No matter if it's a project/brand or a personal profile - once a profile's ownership has been assigned & verified, it should be very hard to remove that status again.

My instant take would be:
If a profile represents a project/company, then it shouldn't matter what that profile does (in light of the verification, of course), since that represents whatever the project stands for, whether you like it or not. It's then up to them to justify/explain their position.
Similar for personal profiles (though the risk of getting hijacked is higher for them).

I do think there should be something in place to take the "verified" status away again, in case a profile gets stolen and/or uses the verified badge to leverage credibility. But in general, I think the verified badge is there to show that the owners behind a handle are who they say they are.

Of course a reporting option could make sense.
Post scammy shit repeatedly = lose your badge.
(Doesn't mean you can't repeat your shit. It just means it gets less credibility while you're doing it.)

@ZKJew
Contributor

ZKJew commented Feb 11, 2024

I really like the idea of having a shared "database" for establishing reputation or knowledge of a profile. Maybe it would be useful to have a standard for apps to mint NFTs for profiles with information encoded like their likelihood of being a bot (from 0-100), their status on the app, and any information the user voluntarily serves to the app (a rough shape is sketched below). Periodic checks are definitely a must, but that should be left up to the apps that are serving the data, imo.
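
A minimal sketch of the metadata such an NFT could encode, assuming the fields mentioned above; the interface name and fields are illustrative, not an existing Lens or app standard.

```ts
// Hypothetical metadata an app might encode when minting a profile-info NFT.
interface ProfileAttestationMetadata {
  profileId: string;      // Lens profile the NFT refers to
  app: string;            // app that minted it, e.g. "hey"
  botLikelihood: number;  // 0-100, the app's estimate of how likely the profile is a bot
  status: string;         // app-level status, e.g. "verified" or "member"
  voluntary?: Record<string, string>; // info the user voluntarily shared (links, claims)
  issuedAt: number;       // unix timestamp, useful for the periodic re-checks mentioned above
}
```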

@hjjlxm

hjjlxm commented Feb 11, 2024

Some takes to the approach:

The protocol sets up a "database" that holds a list of all verified profiles.

Regarding web3, and to avoid a single point of failure, the database should be decentralized in some way, e.g. pinned on IPFS or backed up somewhere, so the apps can easily download the list anytime and anywhere (a minimal fetch sketch follows).
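
A minimal sketch of how an app could pull such a list, assuming it is published as a JSON array of profile IDs pinned on IPFS; the CID and gateway below are placeholders.

```ts
// Fetch a hypothetical verified-profiles list pinned on IPFS via a public gateway.
const LIST_CID = "bafy-placeholder-cid"; // placeholder, not a real published list
const GATEWAY = "https://ipfs.io/ipfs/";

async function fetchVerifiedList(): Promise<string[]> {
  const res = await fetch(`${GATEWAY}${LIST_CID}`);
  if (!res.ok) throw new Error(`Failed to fetch list: ${res.status}`);
  return res.json(); // e.g. an array of verified profile IDs
}
```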

Each platform can submit to that database, adding new profiles as verified.

How do we ensure the integrity of each platform when the protocol may have dozens of them in the future? The rules of a single platform could be abused or compromised, lowering the quality of the reputation score when submitting to the database. I think there should be a way to consistently ensure the transparency and integrity of the rules across each platform.

@EthWarrior
Contributor

Highly supportive of creating a global verification, either as metadata or even onchain.

I would recommend structuring the verifications as either a) Global Properties or b) Local Properties.

Global Properties would mean that a verification property can be used across all the apps (social verification or humanity verification), while Local Properties are used at the application level (other apps can use them, but by default the goal is not to solve something globally).

Curious to hear thoughts around the creation of verification properties split into global and local.

Also, what kind of framework should there be for new properties? Could it be governed by an LIP?

@JuampiHernandez

  • Global Verification -> user is real.
  • Local verification -> Status/inside perks on each app.

I think we can all agree on that approach. Now, the global verification is quite tricky. If we focus on a score, it's possible that bots enter the score system and get the verification badge. I would suggest having curation systems.

Selected users (who can be selected by the community) can report a user that is globally verified and take down its badge (showing proof of why).

On a side note: there is another option besides a reputation score to give badges, and that is linking social media accounts (e.g. Twitter, GitHub), but some users may want to stay anonymous, so it's not a fit for everyone.

@Gifty7

Gifty7 commented Feb 11, 2024

omg its happening

@gps1998

gps1998 commented Feb 11, 2024

Full support

@carstenpoetter

Verifications - if they are supposed to be part of Lens - can only be global. Dapps can have their own systems and verify whatever they see fit, but those verifications are only relevant to the respective dapp.

There's also the question of what a protocol like Lens is supposed to verify. Personally, I only see impersonations, name squatting, etc. as a problem. So it's relevant to companies, projects, celebrities (who is a celebrity, btw?).
I don't know much about the various attestation services that are around, but the ones I have interacted with can't verify a brand, iirc.
So the only option seems to be that if a brand approaches, e.g., Hey and requests a verification, and Hey verifies it, that information is then available to all other dapps in the ecosystem. Though, if Lens is decentralized, this information can't be stored on a company server or the like.

@ZKJew
Contributor

ZKJew commented Feb 11, 2024

(Replying to @carstenpoetter's comment above.)

I just want to reply to the point about squatting, name verification, etc. I think frontends should be able to push the handles they deem to be for verified brands to this decentralized database, for other frontends to copy and push to their own.

@JosepBove

JosepBove commented Feb 11, 2024

I feel that this should not be set at the protocol level; some developer should create a product on top of Lens with a dispute system, and if it's good enough, everyone will integrate it. It's also important to me to keep decentralization in mind and not just store this in a database.

@anjaysahoo

Here are my thoughts on this and how I imagine social media verification should work.

Problem statement: confirm whether an account is genuine, and then give that account a verification badge, instead of identifying the person behind the account for verification.

  1. Verifying whether the person behind an account is real using any method other than biometric or physical verification is pointless. Verification through on-chain activity, which lets us score accounts, should be the right method, I believe.
  2. Also, these verification statuses should be dynamic, based on the account score we assign at a given time, rather than granted when the user requests them.
  3. Verification can be divided into two categories, with at most 3 levels of verified status to keep things simple:
  • Personal: given based on the score.
  • Brand: given based on activity on other platforms like GitHub, Twitter, etc., as brands can't do much at the start to build a good score.
  4. The score that eventually leads to verification status should work like how we ourselves try to verify an account, only much better (a rough sketch of such scoring follows below):
  • To verify an account, we look at who follows them; if someone verified or well known follows them, that's a big plus.
  • The account's posts being liked or reposted by high-score accounts is also a big plus.
  • Make reporting any account easier, and again give more weight to reports from verified accounts.
  • Many other parameters with different weights.
  5. Only protocol-level verification should exist. Allowing apps to contribute to verification can complicate things and creates bias, since each app would have its own algorithm for verifying an account that other apps haven't agreed on. Even if there is a distinction between the two types, Global and App verification, allowing two kinds of verification in the first place creates confusion at the user level, and social media starts looking like a game we are playing. Apps can have their own setup for this if they want.

Ideally, through this verification we should be able to mark any account as legit until they start misusing it, at which point they lose their status instantly, making it much easier for users to use the platform.
I know there could be multiple challenges to achieving this, but this is how I imagine verification on social media should work.

As I write this, I am still trying to understand this complex verification problem and would love to get feedback from you on my thoughts.
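
As a rough illustration of the weighted scoring in point 4 above - all signals, weights, and the threshold are made up for illustration, not part of any existing reputation score:

```ts
// Illustrative weighted-score sketch; nothing here reflects an actual Lens reputation score.
interface AccountSignals {
  verifiedFollowers: number;    // followers that are themselves verified
  highScoreEngagements: number; // likes/reposts from high-score accounts
  reportsFromVerified: number;  // reports filed by verified accounts
}

const WEIGHTS = { verifiedFollowers: 5, highScoreEngagements: 2, reportsFromVerified: -10 };
const VERIFICATION_THRESHOLD = 100; // arbitrary cutoff for illustration

function accountScore(s: AccountSignals): number {
  return (
    s.verifiedFollowers * WEIGHTS.verifiedFollowers +
    s.highScoreEngagements * WEIGHTS.highScoreEngagements +
    s.reportsFromVerified * WEIGHTS.reportsFromVerified
  );
}

function isVerified(s: AccountSignals): boolean {
  return accountScore(s) >= VERIFICATION_THRESHOLD;
}
```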

@vicnaum
Contributor

vicnaum commented Feb 12, 2024

I was thinking about reputation/verification systems a lot during development.
This kind of system can actually already be built by anyone (maybe it already exists!), and the only thing it requires is social trust.
I like to look at this problem like Michelin ratings - in the 1900s a car tyre manufacturer started rating restaurants, and now it's one of the most reputable independent ratings. The same thing can happen on-chain: there might appear an entity which provides really good verification methods or reputation rankings, and if it's deemed good by most people, it can be used by any app or contract. This might even get monetized - either pay-to-access the rating (by contracts, or a paid API), or a pay-to-be-verified approach (if verification requires expenses, or like Twitter's $8 "verification"). I'm encouraging builders to try different methods and build such a system - there's certainly a lack of such a thing on the market.

But back to our LIP-9, and what we can do from the Lens team: I propose building a separate contract (a "database") which can contain entries for every Profile, and having a list of Trusted Verifiers (for example: Hey, Orb, Tape, Aave, etc.). Trusted doesn't mean only they can add entries - it can still be permissionless - but at least it will be a good place for people to find which entities were marked as Trusted in the first place (we can always transition away from that later if needed).

So when a profile wants to get verified, the following steps are done:

  1. Profile applies to Hey (for example) and asks for Verification
  2. Hey asks for any details required (github account, twitter, credit card, SSN, orb retina scan, whatever they decide is good and enough)
  3. Profile provides all the details required via Hey interface/API/etc
  4. Hey goes to this Verification Registry contract and adds an Entry to that Profile from their Verifier address
  5. Entry appears on-chain in the contract:
ProfileID#12345:
  - Verification from 0xHEY:
    - verificationHash (or any details that Hey wishes to provide that can prove what exactly they verified)
    - profileOwnerDuringVerification
    - timestampOfVerification
    - ...
  - Verification from 0xORB:
    - ...
  6. Any other App, User or contract can check which verifications the given Profile has and calculate their score based on the number of verifications, their quality (different verifiers might have different reputations), etc.
  7. If the profile behaves badly, is reported, stolen, or transferred - Hey can invalidate and withdraw their Verification (this is where the periodic-checks aspect mentioned above comes in).

This system is open enough so anyone can build verification systems on top of it, but also centralized enough (i.e. everything is stored in one contract) to have good visibility and ease of use by the apps and other contracts.
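
To make the proposal above more concrete, here is a minimal sketch of how the registry entries and the score calculation in step 6 could be modeled. It is written in TypeScript purely for illustration - the actual proposal is an on-chain contract, and all names here are hypothetical:

```ts
// Hypothetical off-chain model of the proposed VerificationRegistry.
interface VerificationEntry {
  verifier: string;                       // verifier address, e.g. "0xHEY" or "0xORB"
  verificationHash: string;               // hash of whatever the verifier checked
  profileOwnerDuringVerification: string; // owner address at the time of verification
  timestampOfVerification: number;        // unix timestamp
  revoked: boolean;                       // set when the verifier withdraws the entry (step 7)
}

// profileId -> entries from all verifiers
const registry = new Map<string, VerificationEntry[]>();

function addEntry(profileId: string, entry: VerificationEntry): void {
  const entries = registry.get(profileId) ?? [];
  entries.push(entry);
  registry.set(profileId, entries);
}

function revokeEntry(profileId: string, verifier: string): void {
  for (const e of registry.get(profileId) ?? []) {
    if (e.verifier === verifier) e.revoked = true;
  }
}

// Step 6: anyone can read the entries and compute their own score,
// e.g. weighting each verifier differently.
function score(profileId: string, verifierWeights: Record<string, number>): number {
  return (registry.get(profileId) ?? [])
    .filter((e) => !e.revoked)
    .reduce((sum, e) => sum + (verifierWeights[e.verifier] ?? 1), 0);
}
```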

People can leverage composability to build additional contracts on top of that main VerificationRegistry, for example:

  • A contract that automatically submits a Verification Entry if a Profile holder also holds a specific NFT (ENS, BoredApe, rAAVE, etc.), and withdraws it if the user stops holding it
  • An oracle contract that creates a Verification Entry based on something done in the real world off-chain (banking, phone numbers, Worldcoin orbs, etc.)
  • A Social Verification contract - already-verified and popular users (Stani, Vitalik, ZachXBT, ...) can vouch for somebody to be verified by signing a message with their Profiles, and the contract submits a Social Verification Entry (containing their signatures, for example) vouching for the given Profile (and then those profiles can vouch for others, maybe requiring more signatures at each level); a rough sketch of this follows below
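
A minimal sketch of the social-vouching idea in the last bullet, assuming already-verified profiles sign a vouching message and an entry is submitted once enough signatures are collected. Signature verification is elided, and the names and threshold are illustrative:

```ts
// Hypothetical social-vouching check; not an existing Lens contract or API.
interface Vouch {
  voucherProfileId: string; // an already-verified profile doing the vouching
  signature: string;        // their signed vouching message (verification elided here)
}

const REQUIRED_VOUCHES = 3; // arbitrary threshold for illustration

function canSubmitSocialVerification(
  vouches: Vouch[],
  isVerified: (profileId: string) => boolean
): boolean {
  const validVouches = vouches.filter((v) => isVerified(v.voucherProfileId));
  return validVouches.length >= REQUIRED_VOUCHES;
}
```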

The key point here is allowing many competing verification entries, which can later be deemed strong or weak.
We can also have negative entries (reports about theft, scams, etc.), which can likewise come from different agents, some strongly trusted, some not - keeping this decentralized.

So I guess the main thing we can start with is building such a Registry.

Regarding the Reputation system - it's a bit different, and might depend more on realtime and off-chain data (such as post content, comments, likes, reports, upvotes/downvotes, etc.). It could still be on-chain, but it might require more bandwidth, involve ML, etc. It might be created as an RaaS (Reputation-as-a-Service) API which analyses all the data and calculates a score. The best of such systems can be determined by competition, and each app can try to build something like that, use a general one, or some combination. One might also build a reputation system based on on-chain staking/slashing. I propose discussing the Reputation system as a separate LIP (unless somebody has a good vision that can combine both, but I don't have one rn).

@0xchristinab

0xchristinab commented Feb 13, 2024

Some ideas I have been thinking about. Global Verification Properties could be unambiguous and based on a general framework that ties to various inputs (PoH, KYC, PQ) which give the profile a score. A layered approach to verification, building on various "pillars" of evidence. This speaks less to quality and engagement and more to humanity.

App verification should be separate and serve a different need, more qualitative in nature. Users with established reputations across certain apps can carry that trust over to new ones if recognized. For example, one application may do a good job of curating a small, highly engaged audience, so a verification there could be extremely instructive for another application that wants to attract that audience. It serves as a visual indicator of trust within the community and section of the Lens graph - again, this is more subjective and curation-based.

@punkess

punkess commented Feb 17, 2024

In general I think having verified accounts improves the UX a lot - if it's done right. And "done right" is very difficult imo, and dependent on the community 🤔.
What is “verified” supposed to mean in the context of Lens (and this LIP)?
-> Imagine someone has a carnival business in Brazil/Cologne/... with the domain metamask.fun and asks to get their lens handle @lens/metamask verified. Should this user be the verified @lens/metamask account?
What are the criteria to become verified? Is it possible to “hardcode” a process?
How can it scale / how time-consuming should it be for projects/protocols/users to get verified?

(All the questions and thoughts are independent of a lens wide or app specific verification process.)

@anjaysahoo

(Replying to @punkess's questions above.)

These are valid questions. Account verification is the toughest problem, and it becomes even tougher in the web3 space, where everything needs to be managed in a democratic way.

The solution to these kinds of issues could be separating personal and brand account verification.

Personal account creation can work like it does now, but brand account creation should be done differently. We should have separate contracts that govern how brand accounts are managed; it has a lot of similarities with owning a domain on the internet. We can have rules like:

  1. Everyone is free to create a brand account, but verification is done through a tougher process which they need to clear.
  2. The account should be tradeable, so any company that wishes to buy it can purchase it. This is possible with a personal account too, but here we should have the capability to delete the past posts, followers, etc. from the account, so that when a company takes ownership they get a fresh account.

@benalistair

(Replying to @JuampiHernandez's comment above.)

I like the idea of creating some type of curation system. It would definitely be needed, imo.

@defispartan

I definitely support the goal of this LIP but I think the objective should be focused on building a database of public verifications and attestations (votes) rather than a universal claim about whether a profile is verified.

The main issue I see with creating a protocol-level verification badge or score is that it doesn't have a single definition. It could mean: "whether a profile represents a human" or "whether a handle is shared across multiple platforms".

The latter definition gets more complicated when you consider that a Lens profile can own multiple handles, and profile ownership / identities are constantly changing on Lens and other platforms as well.

Instead of creating a label for applications to use, I think the role of the protocol should be to create a detailed registry of profile data, then let individual users or applications use this data to make decisions, either directly or through custom algorithms.

In my opinion, the best places to continue scoping this LIP or to create new LIPs would be:

  1. Profile Metadata - users making claims about themselves, such as linking social profiles, that can be validated by (3)
  2. ENS Records - link a Lens profile to an ENS domain based on the profile owner or a special lens text record
  3. Create standard types of attestations for all users or specific groups to vote on profile characteristics: profile compromised, user verified themselves on a social platform / frontend, etc. Voting could happen directly on Lens publications or profiles using open actions (cc @iPaulPro). A rough sketch of such attestation types follows below.
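
A rough sketch of what standardized attestation types from point 3 could look like; the type names and fields are illustrative, not an existing Lens or EAS standard:

```ts
// Hypothetical standardized attestation types for voting on profile characteristics.
type AttestationType =
  | "profile-compromised"
  | "social-account-verified" // user verified themselves on a social platform / frontend
  | "handle-impersonation";

interface ProfileAttestation {
  profileId: string;   // Lens profile the attestation is about
  type: AttestationType;
  attester: string;    // address of the user or app making the claim
  evidence?: string;   // e.g. a URI pointing to supporting proof
  createdAt: number;   // unix timestamp
}

// Apps or users can aggregate these however they like, e.g. count
// "profile-compromised" attestations from attesters they trust.
function countFromTrusted(
  attestations: ProfileAttestation[],
  type: AttestationType,
  trusted: Set<string>
): number {
  return attestations.filter((a) => a.type === type && trusted.has(a.attester)).length;
}
```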
