[LIP-9] Lens-Wide Verification Process #32
Conversation
Should lens-wide verification require periodic checks? I'm not sure what criteria are included in the reputation score, but it seems those would change over time. Considering a verification from "Lens" would carry more weight, it should be pretty bulletproof. Regarding client-layer verifications, maybe a simpler way would be to allow pinning onchain attestations to your profile. If a client wants to open their verification to EAS, it would conceivably be simple to allow individual accounts to choose the attestations they want across all clients. (Honestly, I don't know how this works exactly, just spouting ideas.)
TBH, I didn't think of periodic checks yet. Thanks a lot for that input! I figured the main issue currently is to verify whether a profile qualifies as either a "known entity" or "someone who's trustworthy". The latter means either proving who they are (one way or another) and then putting their reputation on the line with every post they make, or staying anonymous but building up a reputation that way. In general, my instant take would be: I do think there should be something in place to take away the "verified" status again, in case a profile gets stolen and/or uses the verified badge to leverage credibility. But overall, I think the verified badge is there to show that the owners behind a handle are who they say they are. Of course, a reporting option could make sense.
I really like the idea of having a shared "database" for establishing reputation or knowledge of a profile. Maybe it would be useful to have a standard for apps to mint NFTs for profiles with information encoded like their likelihood of being a bot (from 0-100), their status on the app, and any information the user voluntarily serves to the app. Periodic checks are definitely a must, but that should be left up to the apps that are serving the data imo.
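To make that concrete, here is a minimal TypeScript sketch of what such encoded profile information plus a staleness check for periodic re-verification could look like. All field names and the age window are hypothetical, not part of any existing Lens standard:

```typescript
// Hypothetical shape of the reputation metadata an app could encode
// into an NFT minted for a profile. Field names are illustrative only.
interface ProfileReputation {
  profileId: string;     // the Lens profile the attestation is about
  app: string;           // which app issued it, e.g. a handle or address
  botLikelihood: number; // 0-100, higher = more likely a bot
  status: string;        // app-specific status, e.g. "active", "flagged"
  issuedAt: number;      // unix timestamp, enables periodic re-checks
}

// A periodic check could simply treat old attestations as expired,
// forcing the issuing app to refresh them.
function isStale(rep: ProfileReputation, maxAgeDays: number, now: number): boolean {
  return now - rep.issuedAt > maxAgeDays * 24 * 60 * 60;
}
```

Consuming apps could then ignore or down-weight any attestation where `isStale` returns true, which keeps the periodic-check responsibility with the issuing app, as suggested above.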
Some takes on the approach:
Regarding web3 and avoiding a single point of failure, the database should be decentralized in some way, e.g. pinned on IPFS or backed up somewhere, so the apps can easily download the list anytime and anywhere.
How do we ensure the integrity of each platform when the protocol may have dozens of them in the future? The rules of a single platform could be abused or compromised and then lower the quality of the reputation score when submitting to the database. I think there should be a way to consistently ensure the transparency and integrity of the rules across each platform.
Highly supportive of creating a global verification, either as metadata or even onchain. I would recommend structuring verifications as either a) Global Properties or b) Local Properties. Global Properties would mean a verification property can be used across all apps (social verification or humanity verification), while Local Properties are used at the application level (other apps can use them, but by default the goal is not to solve something globally). Curious to hear thoughts around the creation of verification properties split into global and local. Also, what kind of framework should there be for new properties? Could it be governed by a LIP?
I think we can all agree on that approach. Now, the global verification is quite tricky. If we focus on a score, it's possible that bots game the score system and get the verification badge. I would suggest having curation systems: selected users (who can be chosen by the community) can report a user that is globally verified and take down their badge (showing proof of why). On a side note: there is another option besides a reputation score for giving badges, which is linking social media accounts (e.g. Twitter, GitHub), but some users may want to stay anonymous, so it's not a fit for everyone.
omg it's happening
Full support
Verifications, if they are supposed to be part of Lens, can only be global. Dapps can have their own systems and verify whatever they see fit, but those verifications are only relevant to the respective dapp. There's also the question of what a protocol like Lens is supposed to verify. Personally, I only see impersonations, name squatting, and the like as a problem. So it's relevant to companies, projects, celebrities (who is a celebrity, btw?).
I just want to reply to the point about squatting, name verification, etc. I think front ends should be able to push handles that they deem belong to verified brands to this decentralized database, for other front ends to copy and push to their own.
I feel that this should not be set at the protocol level; some developer should create a product on top of Lens with a dispute system, and if it's good enough everyone will integrate it. It's also important to me to keep decentralization in mind and not just store it in a database.
Here are my thoughts on this and how I imagine social media verification should work. Problem statement: confirm whether an account is genuine, and then provide a verification badge to that account, instead of identifying the person behind the account for verification.
Ideally, through this verification, we should be able to mark any account as legitimate until it starts being misused, at which point it loses its status instantly, making it much easier for users to use the platform. As I write this I am still trying to understand this complex verification problem and would love to get feedback from you guys on my thoughts.
I was thinking about reputation/verification systems a lot during development. But back to our LIP-9, or what we can do from the Lens team: I propose building a separate contract (a "database"), which can contain entries for every Profile, and having a list of Trusted Verifiers (for example: Hey, Orb, Tape, Aave, etc.). Trusted doesn't mean only they can add entries; it can still be permissionless, but at least it will be a good place for people to find which entities were marked as Trusted in the first place (we can always transition away from that later if needed). So when a profile wants to get verified, the following steps are done:
This system is open enough that anyone can build verification systems on top of it, but also centralized enough (i.e. everything is stored in one contract) to have good visibility and ease of use by apps and other contracts. People can leverage composability to build additional contracts on top of that main VerificationRegistry, for example:
The key point here is allowing many competing verification entries, which can later be deemed strong or weak. So, I guess the main thing we can start with is building such a Registry. Regarding the reputation system: it's a bit different, and might depend more on realtime and off-chain data (such as post content, comments, likes, reports, upvotes/downvotes, etc.). While it can still be on-chain, it might require more bandwidth, involve ML, etc. This might be created as a RaaS (Reputation-as-a-Service) API, which analyses all the data and calculates a score. The best such systems can be determined by competition, and each app can try to build something like that, use a general one, or some combination. One might build a reputation system based on staking/slashing on-chain. I propose discussing the reputation system as a separate LIP (unless somebody has a good vision that can combine both, but I don't have it rn).
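A rough in-memory TypeScript sketch of the registry idea described above (the real thing would be a Solidity contract; the `VerificationRegistry` name comes from the comment, but `Entry` and the method names are assumptions for illustration):

```typescript
// One competing verification entry; anyone can submit one (permissionless).
type Entry = { verifier: string; claim: string; timestamp: number };

// In-memory model of the proposed on-chain registry.
class VerificationRegistry {
  private trusted = new Set<string>();          // e.g. Hey, Orb, Tape, Aave
  private entries = new Map<string, Entry[]>(); // profileId -> entries

  addTrustedVerifier(verifier: string): void {
    this.trusted.add(verifier);
  }

  // Permissionless: anyone can add an entry for any profile.
  addEntry(profileId: string, entry: Entry): void {
    const list = this.entries.get(profileId) ?? [];
    list.push(entry);
    this.entries.set(profileId, list);
  }

  // Apps can choose to weigh only entries from trusted verifiers,
  // while still having visibility into all competing entries.
  trustedEntries(profileId: string): Entry[] {
    return (this.entries.get(profileId) ?? []).filter(e => this.trusted.has(e.verifier));
  }
}
```

Composable contracts built on top could, for instance, wrap `trustedEntries` with their own weighting of strong vs. weak entries.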
Some ideas I have been thinking about. Global verification properties could be unambiguous and based on a general framework that ties various inputs (PoH, KYC, PQ) into a score for the profile: a layered approach to verification, building on various "pillars" of evidence. This speaks less to quality and engagement and more to humanity. App verification should be separate and serve a different need, more qualitative in nature. Users with established reputations across certain apps can carry that trust over to new ones if recognized. For example, one application may do a good job of curating a small, highly engaged audience, so a verification there could be extremely instructive for another application that wants to attract that audience. It serves as a visual indicator of trust within the community and that section of the Lens graph; again, this is more subjective and curation based.
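The layered "pillars of evidence" idea could be sketched as a simple weighted score. The specific weights, the 0-1 confidence inputs, and treating the pillar names (PoH, KYC, PQ) as keys are all placeholder assumptions, not a defined standard:

```typescript
// Placeholder weights per evidence pillar; these would be set by the
// framework the comment describes, not hardcoded like this.
const WEIGHTS: Record<string, number> = { PoH: 0.5, KYC: 0.3, PQ: 0.2 };

// Each pillar reports a 0-1 confidence; a missing pillar counts as 0,
// so the score naturally layers: more evidence, higher score.
function globalScore(evidence: Record<string, number>): number {
  let score = 0;
  for (const [pillar, weight] of Object.entries(WEIGHTS)) {
    score += weight * (evidence[pillar] ?? 0);
  }
  return Math.round(score * 100); // normalized to 0-100
}
```

A profile with only Proof-of-Humanity would land mid-range, while one with all three pillars would approach the maximum, which matches the "humanity, not quality" framing above.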
In general I think having verified accounts improves the UX a lot. If it's done right. And "done right" is very difficult imo and dependent on the community 🤔. (All the questions and thoughts are independent of a lens wide or app specific verification process.)
These are valid questions. Account verification is the toughest problem, and it becomes even tougher in the web3 space, where everything needs to be managed in a democratic way. The solution to these kinds of issues could be separating personal and brand account verification. Personal account creation can be done as we do now, but brand account creation should be done differently. We should have separate contracts that govern how brand accounts are managed, and it has a lot of similarities with owning a domain on the internet. We can have rules like
I like the idea of creating some type of curation system. Would definitely be needed imo
I definitely support the goal of this LIP, but I think the objective should be focused on building a database of public verifications and attestations (votes) rather than a universal claim about whether a profile is verified. The main issue I see with creating a protocol-level verification badge or score is that it doesn't have a single definition. It could mean "whether a profile represents a human" or "whether a handle is shared across multiple platforms". The latter definition gets more complicated when you consider that a Lens profile can own multiple handles, and profile ownership / identities are constantly changing on Lens and other platforms as well. Instead of creating a label for applications to use, I think the role of the protocol should be to create a detailed registry of profile data, then let individual users or applications use this data to make decisions either directly or through custom algorithms. In my opinion, the best places to continue scoping this LIP or to create new LIPs would be:
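The "raw data plus custom algorithms" split could look like the following TypeScript sketch, where the protocol only stores attestations and each app applies its own policy. The `Attestation` shape, the recognized-attester set, and the threshold rule are all illustrative assumptions:

```typescript
// The protocol layer stores raw votes; it makes no verified/unverified claim.
type Attestation = { attester: string; vote: "trust" | "distrust" };

// One app's policy: a profile counts as verified if the net trust from
// attesters this app recognizes meets its threshold. Another app could
// apply a completely different rule over the same raw data.
function appVerified(
  attestations: Attestation[],
  recognized: Set<string>,
  threshold: number
): boolean {
  const net = attestations
    .filter(a => recognized.has(a.attester))
    .reduce((n, a) => n + (a.vote === "trust" ? 1 : -1), 0);
  return net >= threshold;
}
```

Two apps reading the same registry can disagree about the same profile, which is exactly the point: the protocol stays neutral and the "verified" decision stays local.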
LIP-9 for setting up a lens-wide verification process
In light of LIP-5, which could/will open up the garden, a process for verified profiles becomes more relevant.
Currently, each interface/app built on top of the protocol has its own way to verify profiles.
That makes it hard for new people to distinguish verified from non-verified profiles, depending on the platform they were onboarded with.
(Example: a verified profile on Hey doesn't appear as verified on Tape.)
And for projects/brands it's a hassle to get verified separately for each app/platform/interface.
Also, the newly introduced "reputation score" opens up new possibilities on that front.
Here's an approach to start the discussion:
Optional:
A "negative" entry, for example if a profile got stolen, impersonates someone, squats a name, scams, etc.,
so that this information can be accessed by all platforms across the protocol.
Though this bears some risks, which should probably get a separate discussion (imho).
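The optional negative entry could be sketched like this in TypeScript, so every platform can surface the same warning from shared data. The `SharedEntry` fields and verdict values are hypothetical, intended only to show the shape of the idea:

```typescript
// Possible verdicts a shared entry could carry; only "verified" is positive.
type Verdict = "verified" | "revoked" | "impersonation" | "scam";

// Hypothetical shape of a shared (positive or negative) entry.
interface SharedEntry {
  profileId: string;
  verdict: Verdict;
  reporter: string;    // who filed it, e.g. a trusted verifier
  evidenceURI: string; // e.g. an ipfs:// link to the supporting proof
}

// Any platform can treat all negative verdicts uniformly.
function isFlagged(entry: SharedEntry): boolean {
  return entry.verdict !== "verified";
}
```

Requiring an `evidenceURI` on negative entries would also give the separate risk discussion mentioned above something concrete to review when a flag is disputed.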