Age verification, verifiable credentials, and the privacy tension at the center of the debate

By yaso · Apr 11, 2026

I left a job in March 2025 after years of engineering and research on privacy and identity at the forefront of the KYC industry, more specifically in biometrics and liveness detection. KYC is the infrastructure built to verify who someone is before granting them access to a transaction. That same year, I was invited by the Ministry of Justice, through its Secretary of Digital Rights, Dr. Lilian Cintra, to join the technical committee contributing to what would become Decree 12.880, which regulates the ECA Digital, Brazil's framework for protecting children and adolescents in digital environments.

The timing was not coincidental. The expertise that made me useful to the committee was exactly the expertise I had just walked away from: I know how identity verification gets built in a country where nearly every ordinary aspect of life is digitized, where data leaks are routine, and where identity fraud is a familiar risk. I knew the default path: biometrics, document scanning, identity data flowing through a commercial verification layer. It works, in the narrow sense. It also creates the potential for exactly the kind of data infrastructure that the ECA Digital is supposed to prevent.

So I arrived at that table in April with a specific obsession: that age verification, framed correctly, is not primarily a child protection problem. It is a privacy problem, an anti-censorship problem, and an infrastructure innovation problem. Those framings are not in tension with child protection. They are what makes child protection sustainable without turning the internet into a surveillance apparatus. I spent the following months watching the decree take shape and contributing to it, knowing precisely what would happen if that argument didn't land.

In October 2025, I was invited by the president of Dataprev to work there on all aspects of cybersecurity. By then I had been inside the policy process for months. I knew what the decree was trying to do. I also knew what I would find at the company: the existing direction was the default path I had spent those six months arguing against. My work there was, in significant part, a continuation of that argument, now from inside the implementation rather than the normative framework.

That argument eventually shifted not just the normative direction but the operational one. I knew the Ministry of Management and Innovation had already developed a small MVP for verifiable credentials in lending. It never got adopted, but the exercise was worth it: the team that built it is now the team developing verifiable credentials for age verification. I made the case that this was the right direction, that the MGI should be the issuer, and that the architecture should be built on open infrastructure. That idea gained traction. There is now a team developing it as a testbed, and I expect to have more to share soon. For now, if you want to go deeper, you can look at Dataprev's Inji fork, or you can ask me directly.

That shift is partially embedded in Decree 12.880 as well. Not just because of me, but because the team was absolutely committed to not opening the door for surveillance and privacy loss. The decree is better for it.

What the decree gets architecturally right

Decree 12.880 explicitly names verifiable credentials as the preferred technical approach for age signaling. App stores and operating systems are required to signal a user's age category to third party services, and the decree says this should be done, preferentially, through verifiable credentials. That is a meaningful architectural choice, and not an obvious one.

Verifiable credentials, when implemented correctly, allow an assertion, "this person is over 18," to travel without the underlying identity data that proves it. The claim moves; the document does not. The verifier learns that you qualify; it does not learn who you are or where else you have been verified.
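As a sketch of that data minimization property, here is what crosses the wire in each model. The field names and identifiers are illustrative, not taken from Decree 12.880 or any wallet specification:

```python
# Illustrative only: field names and values are invented for the sketch,
# not drawn from any Brazilian spec or credential format.

# What a KYC-style check transmits: the whole identity record.
kyc_payload = {
    "name": "Maria Souza",
    "cpf": "000.000.000-00",
    "date_of_birth": "1990-03-14",
    "document_image": "<scan bytes>",
}

# What a verifiable credential presentation transmits: only the claim.
vc_presentation = {
    "claim": {"over_18": True},
    "issuer": "did:example:issuer",  # who vouches for the claim
    "proof": "<issuer signature>",   # lets the verifier check it offline
}

# The verifier learns that the holder qualifies, and nothing else.
identity_fields = {"name", "cpf", "date_of_birth", "document_image"}
leaked = identity_fields & set(vc_presentation["claim"])
print(leaked)  # set() -- no identity data crosses the wire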

Article 24 builds on this with constraints that are more principled than most frameworks manage. Data collected for age verification cannot be used for any other purpose, including, explicitly, behavioral profiling. Continuous automated data sharing is prohibited. Traceability of a citizen's identity and access history across verification events is also prohibited. The system is not supposed to know that you verified your age on Tuesday on one platform and Thursday on another.

These constraints describe, in normative language, what a privacy preserving system should do. They also describe what the KYC toolkit approach structurally cannot honor yet, which is part of why the architecture choice matters as much as the policy language.

The concrete bet: MOSIP/Inji, digital wallets, and an ecosystem that doesn't exist yet

Dataprev is now developing its solution on MOSIP/Inji, an open source digital public infrastructure stack built for exactly this kind of government identity use case. That choice matters. It reflects the same logic the decree encodes: the infrastructure for verifying who someone is should be public, auditable, and designed not to extract value from the data it handles.

Brazil is not alone in making this bet. Europe is deep in the same design problem, with digital wallet prototypes now reaching the public as part of the EU digital identity framework. The underlying question is identical: how do you allow a citizen to assert a credential, whether of age, nationality, or professional qualification, to a third party without that assertion becoming a surveillance event?

The honest answer is that no solution is perfect yet, because verifiable credentials depend on an ecosystem change that is still underway. The technology is sound. The standards are maturing. But the ecosystem requires answers to questions that are still open: who issues credentials, and under what authority? How does a verifier establish trust in an issuer it has never interacted with before? What happens when a citizen's credential is issued by one government and needs to be recognized by a platform operating under a different jurisdiction? These are not edge cases. They are the normal operating conditions of a global internet, and they don't have clean answers yet.

This is not an argument against verifiable credentials. It is an argument for being precise about what the technology solves and what it defers. It solves the data minimization problem elegantly, when implemented well. It defers the trust infrastructure problem to a set of governance decisions that are still being made in Brussels, in Brasília, and in the standards bodies where these architectures get specified.

Where the tension lives

The privacy tension in verifiable credentials is not in the standard, but in the surface surrounding it. A verifiable credential is a signed, structured claim. The issuer signs a credential attesting that the holder meets an age threshold. The holder presents it to a verifier, who checks the signature without contacting the issuer. The issuer doesn't learn where the credential is being used. The verifier learns only what the claim asserts. That's the design.
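The flow described above can be sketched in a few lines. One loud caveat: real systems use asymmetric signatures (for example Ed25519 inside a JWS), so the verifier holds only the issuer's public key; the HMAC below stands in purely to keep the example stdlib-only, and everything else (key names, claim shape) is invented for illustration:

```python
import hashlib
import hmac
import json

# Toy sketch of the issue/present/verify flow. In production the issuer
# signs with a private key and the verifier checks with the public key;
# HMAC with a shared key is used here only to keep the sketch runnable
# with the standard library.

ISSUER_KEY = b"issuer-signing-key"  # stand-in for the issuer's private key

def issue(claim: dict) -> dict:
    """Issuer signs a minimal claim. No identity data is embedded."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "proof": sig}

def verify(credential: dict) -> bool:
    """Verifier checks the proof locally. The issuer is never contacted,
    so it cannot learn where or when the credential is used."""
    payload = json.dumps(credential["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

cred = issue({"over_18": True})  # issuance: a one-time event
print(verify(cred))              # True -- checked offline, no callback

# Any change to the claim breaks the proof.
tampered = {"claim": {"over_18": True, "vip": True}, "proof": cred["proof"]}
print(verify(tampered))          # False
```

The property that matters for privacy is in `verify`: it runs entirely on the verifier's side, which is what keeps the issuer blind to usage.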

But that model has pressure points. If credential issuance requires a fresh interaction, which it often does in practice, then the issuance event itself becomes a surveillance surface. If credentials expire and must be refreshed, each refresh is a data point. If revocation requires checking a list maintained by the issuer, the issuer can infer usage patterns from query traffic. None of this is inherent to verifiable credentials. All of it is a design choice that implementations routinely get wrong by defaulting to convenience over privacy.
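The revocation pressure point is worth making concrete. If the verifier queries the issuer per credential ("is credential #4217 revoked?"), the issuer learns that the credential was just used. The alternative, the approach taken by W3C's Bitstring Status List work, is for the issuer to publish one compact bitstring that verifiers download wholesale and check locally, so the issuer sees only opaque bulk fetches. A minimal sketch, with the index and list size invented for illustration:

```python
# Sketch of the status-list pattern for privacy-preserving revocation.
# The issuer publishes one bitstring covering all issued credentials;
# each credential carries an index into it. Sizes and indices here are
# illustrative.

status_bits = bytearray(1024)  # issuer-published list: 8192 credential slots

def revoke(index: int) -> None:
    """Issuer side: flip the bit for a revoked credential."""
    status_bits[index // 8] |= 1 << (index % 8)

def is_revoked(local_list: bytes, index: int) -> bool:
    """Verifier side: check the bit in a locally cached copy.
    No per-credential query, so no usage signal back to the issuer."""
    return bool(local_list[index // 8] & (1 << (index % 8)))

revoke(4217)
downloaded = bytes(status_bits)      # one bulk fetch, cacheable
print(is_revoked(downloaded, 4217))  # True
print(is_revoked(downloaded, 4216))  # False
```

The issuer observes that someone fetched the list, not which credential was checked, which is exactly the side channel the paragraph above describes being closed by design.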

Biometrics are the sharpest version of the broader problem. The decree permits biometric analysis within its age estimation framework and requires immediate, irreversible deletion of document images after the necessary data is extracted. That prohibition is technically straightforward to implement. It is also easy to honor technically while building downstream infrastructure that learns from the data before deletion, or that retains derived signals rather than the raw artifact. The prohibition names the document; it doesn't automatically name everything the document produces. This is not a flaw in the decree's intent. It is what happens when regulatory language meets machine learning infrastructure without sufficient technical specificity, and it is exactly what I spent my time at Dataprev trying to build upstream protection against, because this is a technology problem. (If the law were more prescriptive, it would stall innovation, as some like to say.)

What has to come next

The principles in Article 24 describe what the system should not do. The work that determines whether those principles survive contact with procurement, engineering, and product decisions is in the ANPD's certification process for age verification solutions.

Certification that checks whether document images are deleted is necessary. But the certification that would honor the decree's intent rather than just its letter also checks whether the system generates any persistent derived representation of the individual, whether credential issuance infrastructure logs events in ways that enable cross platform inference, and whether revocation mechanisms create side channels for issuer side usage tracking.

The normative framework is genuinely strong. I know, because I was in the room where it was shaped, and because I arrived there having already seen what gets built when the frame is wrong from the start. The verifiable credentials direction is right. MOSIP/Inji is the right kind of bet for Dataprev. Europe's digital wallet work is asking the same questions in parallel. What comes next, the implementation guidance, the certification criteria, the governance decisions about who issues what to whom, is where the privacy tension either gets resolved or gets quietly deferred until it becomes a scandal.

That part is still being written. In Brazil, in Europe, and everywhere else that is about to discover that age verification is harder than it looks when you decide to take privacy seriously.


Yasodara Córdova thinks and builds at the intersection of digital identity, fraud ecosystems, privacy, and why systems break in ways nobody planned for. She has worked with Harvard University, the World Bank, the W3C, the World Economic Forum, and SXSW, among others.