Data Science Community Knowledge Base

What is random tokenization?

Random tokenization is a method of securing sensitive data so that, in the event of a breach, the stolen data has no value. It masks sensitive information by generating a random string, called a token, to stand in for the underlying data. The token has no mathematical relationship to the value it represents: it is not an encrypted form of the data and carries no meaning on its own. Because the token is random, the original value cannot be derived from it without access to a token vault, a secured store (commonly protected by encryption) that holds the mapping between tokens and their underlying values. Tokens usually must be authenticated through the vault so the underlying information is never exposed during verification. Random tokenization is widely used; for example, credit institutions commonly use it to mask card numbers.
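The vault-based flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the `TokenVault` class, its `tokenize` and `detokenize` methods, and the in-memory dictionary are all assumptions for demonstration. A real vault would encrypt its stored values and require authenticated access.

```python
import secrets

class TokenVault:
    """Illustrative in-memory token vault.

    A real vault would encrypt stored values and gate every
    lookup behind authentication; this sketch only shows the
    token-to-value mapping that makes detokenization possible.
    """

    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, value: str) -> str:
        # The token is drawn from a cryptographic RNG, so it has
        # no mathematical relationship to the underlying value.
        token = secrets.token_hex(16)
        self._token_to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        # Only a party with vault access can recover the value.
        return self._token_to_value[token]

vault = TokenVault()
card = "4111111111111111"        # sample card number
token = vault.tokenize(card)     # random string, safe to store elsewhere
original = vault.detokenize(token)
```

Note that without the vault, the token is just 32 random hex characters; an attacker who steals only the tokenized data learns nothing about the card number.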
