Data privacy refers to the rules, practices and guidelines used to determine the way someone’s information is collected, protected and stored, as well as who has access to it. This data can come from individual users or organizations, and it can include anything from email addresses to financial histories to health information.
What Is Data Privacy?
Data privacy is the practice of protecting personal, private or sensitive information, ensuring it’s collected with the proper consent, kept secure and used only for authorized purposes, while respecting both individual rights and existing regulations.
The three pillars of data privacy are consent, transparency and security. In practice, that means an organization informs individuals whether and why their data is being collected, obtains permission to use it for specific purposes (and is willing to delete or redact it on request) and takes steps to protect the data from unauthorized access or theft.
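To make the consent pillar concrete, below is a minimal Python sketch of how an organization might record which purposes a person has agreed to, check that record before using their data and honor a withdrawal request. The class and field names are illustrative, not taken from any specific framework.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """One person's consent: which purposes they agreed to, and when."""
    user_id: str
    purposes: set = field(default_factory=set)   # e.g. {"order_fulfillment"}
    granted_at: Optional[datetime] = None

    def allows(self, purpose: str) -> bool:
        # Data should only be used for purposes the person explicitly agreed to.
        return purpose in self.purposes

    def revoke(self) -> None:
        # Honor a withdrawal or deletion request by clearing recorded consent.
        self.purposes.clear()
        self.granted_at = None

record = ConsentRecord("u123", {"order_fulfillment"}, datetime.now(timezone.utc))
record.allows("marketing_emails")  # False: no permission was given for this purpose
```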
While individuals and organizations alike can take measures to shore up their privacy strategies, much of data privacy today is driven by various data protection laws created by governments around the world, which require compliance from companies that want to collect and handle people’s data.
Why Is Data Privacy Important?
Virtually all aspects of daily life are carried out on the internet, whether it’s corresponding, shopping or scheduling appointments. Every time we go online, we leave little breadcrumbs of personal information behind, like an email address, credit card number or location. The companies that operate the websites we visit gather that information and use it in a variety of ways. Sometimes they even share or sell that data via data brokers.
1. Data Privacy Is Important for Companies
With data privacy standards in place, sensitive information is safeguarded against unauthorized access, sharing or theft. Breaches or other misuse can result in significant regulatory consequences for companies, including fines and lawsuits, as well as damage to their brand and reputation.
“Part of offering great customer service is making sure that your customers’ data stays private and is not being compromised,” Vanessa Edwards, a former compliance analyst at Mailchimp, told Built In. “If your data gets compromised, it could lead to some really severe consequences, and you don’t want to put any of your customers through that headache.”
2. Data Privacy Is Important for Individuals
Indeed, lapses in data privacy can be a pain for individuals as well, who may experience financial loss, compromised social media accounts or even identity theft. Data privacy is an important way for people to feel in control of their data, and confident that it is protected when they hand it over to a company.
“At the end of the day, consumers are the lifeblood of [companies’] business. And the consumers interact with them through data,” Dimitri Sirota, the CEO of data privacy company BigID, told Built In. “Businesses have an obligation to not only provide that transparency to consumers, but also provide the capability of meeting the needs of the various regulations in the jurisdictions where they operate.”
Data Privacy Challenges
The need for stronger data privacy measures has become more urgent in the face of increasing security threats, including the following trends:
1. Online Activity Tracking
With people spending more than six hours per day on the internet, companies gather troves of personal information from consumers’ digital footprints. Websites may ask users for permission to record their activity through cookies, but users may still not understand what types of data are being collected and to what extent. As a result, businesses may end up compiling and sharing sensitive information that users aren’t comfortable with.
2. IoT Devices
IoT devices have become widespread in the U.S., with over 60 million households using smart devices. While these technologies make everyday tasks more convenient, the connections they rely on also create more potential entry points for malicious actors. Developing cybersecurity solutions tailored to IoT tools is imperative, especially as the number of IoT devices expands.
3. Rapid Data Growth
The exponential growth of data makes information harder to manage and process, leading to data that gets overlooked and potentially exposed to hackers and other unauthorized parties. While technologies like IoT and cloud computing are meant to help handle this flood of information, massive volumes of data may outpace their ability to analyze it.
4. Generative AI
Generative AI is powered by large language models that teams train by feeding them publicly available data from the internet. ChatGPT and Bard are two examples of tools whose underlying models learn from online sources like forums and websites to craft logical responses to questions and requests. Because it’s unclear exactly what data these tools have access to, personal information may be used for purposes that consumers don’t approve of and aren’t even aware of.
5. Dark Patterns
Dark patterns remain a major issue for consumers, often tricking them into giving away personal information in subtle ways. Consumers may then reveal personal data that is shared with third parties without their knowledge. While California and Colorado have enacted laws to regulate dark patterns, there is no federal law that reins in this questionable UI practice and prioritizes the online privacy of consumers.
Data Privacy Laws to Know
The proliferation of the internet and other data-driven technology has led to a global push to establish comprehensive data protection frameworks, and governments all over the world have created their own privacy laws.
Companies that fail to comply with the necessary data privacy legislation can face hefty fines, the largest in history being a $5 billion penalty imposed on Facebook (now Meta) in 2019.
U.S. Privacy Laws
Unlike many other countries around the world, the United States does not have any comprehensive data privacy laws at the federal level (at least not yet).
The closest thing the U.S. has to a comprehensive data privacy law at the federal level is the Children’s Online Privacy Protection Act (COPPA), which imposes specific standards for how companies can interact with children under the age of 13 and their data online. Otherwise, the U.S. relies on a kind of “patchwork quilt” of laws to protect its adult residents, as Sirota put it.
All U.S. states, territories and protectorates have data breach laws that require companies to notify users if their data has been exposed or compromised. But only 11 states have more extensive data privacy laws on the books. This number is likely to grow, though, as nearly a dozen more states have privacy bills in the works.
California Privacy Rights Act (CPRA)
California was the first state to enact comprehensive data privacy legislation. Initially this was through the California Consumer Privacy Act (CCPA), which was signed into law in June of 2018. A couple of years later, the state passed the CPRA, which essentially replaced and amended several elements of the CCPA, expanding privacy protections and increasing regulations on businesses.
In general, the CPRA aims to give California residents more control over their online data and restrict what companies can do with it. It requires companies to let Californians opt out of third-party sharing of their information for advertising purposes, and to forward those requests to any data brokers the information was sold to or shared with. It also expanded the meaning of “sensitive personal information” to include not just Social Security numbers and bank account numbers, but also geolocation data, political and religious affiliations and biometrics.
Due to the nature of the law, the CPRA is essentially just as powerful as any federal regulation would be, Arlo Gilbert, the CEO of data privacy company Osano, explained. The law protects Californians no matter where in the country they are, but there is no easy way for companies to know someone is a California resident if they have an IP address in a different state. So, in many ways, it’s easier for companies to just treat everyone as if they were a California resident.
“From a technical perspective, you have to treat the most conservative state law as though it’s the law of the land, purely because you can’t differentiate very easily,” Gilbert said. “Companies across the board are treating California as though it’s a federal law.”
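In practice, much of that opt-out handling is technical: the CPRA’s regulations treat the Global Privacy Control browser signal as a valid request to opt out of the sale or sharing of personal information. Below is a rough sketch, using Flask and an illustrative render_page helper, of how a site might check for that signal before loading third-party trackers.

```python
from flask import Flask, request

app = Flask(__name__)

def render_page(load_trackers: bool) -> str:
    # Placeholder for real page rendering; a real app would conditionally
    # include or omit third-party advertising and analytics tags here.
    return "tracking enabled" if load_trackers else "tracking disabled"

@app.route("/product")
def product_page():
    # Browsers send the Global Privacy Control signal as the "Sec-GPC: 1" header.
    opted_out = request.headers.get("Sec-GPC") == "1"
    return render_page(load_trackers=not opted_out)
```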
Over the years, Virginia, Colorado, Utah and Connecticut have enacted comprehensive data privacy laws of their own, many of them inspired by the CPRA.
International Privacy Laws
Sirota estimates there are about 200 laws around the world pertaining to data privacy in countries ranging from Saudi Arabia to Australia. Below are some of the more prominent ones.
General Data Protection Regulation (GDPR)
The General Data Protection Regulation governs the collection, use, transmission and security of data collected from residents within the 27 countries that make up the European Union. Its many mandates include that organizations must obtain clear and explicit consent from individuals before collecting and processing their data, that individuals have the right to access, rectify and erase their data after it has been collected, and that organizations must report data breaches to relevant authorities and affected individuals within a specific timeframe.
GDPR applies to any company in the world, so long as it targets or collects data related to people in the E.U.
It’s the “big dog” of the data privacy world, as Gilbert put it. “[GDPR] was really the groundbreaking framework upon which all other data privacy laws have been modeled.”
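One of those GDPR mandates comes with a hard number: organizations generally have 72 hours from becoming aware of a personal data breach to notify the relevant supervisory authority. The snippet below is a trivial illustration of tracking that deadline, not a compliance tool.

```python
from datetime import datetime, timedelta, timezone

GDPR_NOTIFICATION_WINDOW = timedelta(hours=72)  # notify the supervisory authority within 72 hours

def notification_deadline(breach_detected_at: datetime) -> datetime:
    """Latest time by which the supervisory authority should be notified."""
    return breach_detected_at + GDPR_NOTIFICATION_WINDOW

detected = datetime(2024, 3, 1, 9, 30, tzinfo=timezone.utc)
print(notification_deadline(detected))  # 2024-03-04 09:30:00+00:00
```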
China’s Personal Information Protection Law (PIPL)
Adopted in August of 2021, the PIPL was the first national law comprehensively regulating issues related to data privacy in China. It defines “personal information” as any data that is related to an “identified or identifiable person within the People’s Republic of China,” and excludes anonymized information that can’t be used to identify a specific person. And “sensitive personal information” is personal information that, if disclosed or illegally used, could “cause harm to the security or dignity of a person,” which includes biometric data, religious beliefs and financial accounts.
The PIPL grants people the right to know about, decide on and limit the use of their personal information. Processing sensitive personal information is even more restricted, requiring separate consent from the individual.
Brazil’s General Data Protection Law (LGPD)
The General Data Protection Law (LGPD, short for the Portuguese Lei Geral de Proteção de Dados Pessoais) is a federal law in Brazil designed to unify the 40 existing laws that regulate the collection and processing of personal data. Similar to GDPR, the LGPD spells out nine fundamental rights granted to all Brazilian residents, including the right to request information about the data an organization has collected about them. They are also entitled to an explanation of any automated decision-making a company carries out with their data that could affect them.
The law applies to any person or entity, including the government, that processes personal data of people living in Brazil, even if the entity processing that data is based outside of Brazil.
“It is a bold, strong privacy framework,” Gilbert said. “Brazil is one of the largest economies in the world, and because of that it will also have a ripple effect outside of Brazil.”
Industry-Specific Privacy Laws
In the U.S., laws and regulations concerning data privacy have also been enacted in response to the needs of a specific industry. We’ve highlighted a few below.
Health Insurance Portability and Accountability Act (HIPAA)
HIPAA was put into place to ensure patient confidentiality for all healthcare-related data. It lays out the privacy and security requirements for the collection, storage and sharing of all protected health information (PHI), or data that is collected about patients during medical visits, which includes everything from a patient’s name to their health plan beneficiary number.
Fair Credit Reporting Act (FCRA)
FCRA regulates the collection and use of people’s credit information. It essentially protects any information gathered by consumer reporting agencies, such as credit bureaus, medical information companies and tenant screening services, from being provided to anyone who does not have a purpose specified in the act.
Gramm-Leach-Bliley Act (GLBA)
Named after the three legislators who co-sponsored the act, the GLBA dictates how financial institutions — companies that offer products or services like loans, investment advice or insurance — must deal with individuals’ private information. It regulates the collection and disclosure of private financial information, stipulates that these institutions must implement security programs for this information and prohibits them from accessing private information under false or misleading pretenses.
How Can Users Keep Their Data Private?
On top of the privacy policies created by private companies and the regulations put in place by governments, individuals can also take action to protect their personal information. In just a few steps, people can guard against unwanted attempts to access their data, as well as protect their privacy from entities with which they don’t want to share their information.
1. Create Strong, Unique Passwords
In the last decade, data breaches have struck major companies like Equifax, Facebook and Target, among many others. If you have an online account, odds are hackers have leaked data from at least one of the services you use. Passwords are often the first line of defense against issues like these, so users should create strong, unique passwords for their online accounts and use a password manager to keep track of them.
Sirota suggests that people “tier” their passwords, too, using very strong passwords for important devices and sites and different, perhaps less strong, passwords for sites that matter less. That way, a data breach at one of those less important or less secure sites won’t affect the safety of their more critical data.
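For readers who want a starting point, the short Python sketch below uses the standard library’s secrets module to generate a strong random password; in practice, a good password manager will do this for you.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password on every run
```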
2. Use Two-Factor Authentication
Whenever possible, users should enable two-factor authentication for their online accounts, meaning they log in with a password plus an additional layer of security, like a code texted to their phone.
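The codes most authenticator apps display are time-based one-time passwords (TOTP, defined in RFC 6238). The sketch below shows the underlying idea using only Python’s standard library; real accounts should rely on an established authenticator app rather than hand-rolled code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"   # example secret; real ones come from the service you enroll with
print(totp(secret))           # six-digit code that changes every 30 seconds
```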
3. Regularly Update Software and Apps
Users should make sure the operating system on their devices and the apps they use are up to date in order to patch any security vulnerabilities.
All three major operating systems — Windows, macOS and Chrome OS — can update automatically, but users can also manually enable automatic updates in their device’s settings. For third-party software and apps, users may need to find and enable a “check for updates” option in their settings.
4. Be Cautious With What — and Where — You Share
Users should be cautious about what personal information they share online, particularly on websites they don’t trust. They should be especially wary of public Wi-Fi, which is riskier than private Wi-Fi, and should avoid sensitive activities like online banking while using it.
“Be careful what you share,” Sirota said, adding that organizations tend to “over-collect” customer information they don’t necessarily need. For instance, a coffee shop doesn’t need to know a person’s birthday when they order for pickup online, but it will certainly ask for it. In cases like that, it’s OK to plug in fake information.
5. Avoid Giving Personal Info to AI Apps
Users should be especially careful about the data they share when generative AI is involved. These systems are black boxes, meaning it is next to impossible to tell exactly how their outputs are affected by the data they’re fed, which makes personal data especially vulnerable. This is doubly true now that OpenAI allows companies to customize its GPT-3.5 language model with their own data.
“From a technology perspective, it’s very hard to see how AI could truly become compliant with any of these [privacy] regulations,” Gilbert said. “I caution every company, every individual, you should assume that anything you put into AI right now will eventually end up somewhere else. You should basically assume that you’re posting it publicly.”
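One practical precaution, in that spirit, is to strip obvious identifiers out of text before pasting it into an AI tool. The sketch below uses a few simple regular expressions as an illustration; it will not catch every kind of personal data and is no substitute for simply leaving sensitive details out.

```python
import re

# Rough patterns for a few common identifiers; far from exhaustive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(text: str) -> str:
    """Replace likely personal identifiers with placeholder tokens."""
    for pattern, token in ((EMAIL, "[EMAIL]"), (PHONE, "[PHONE]"), (SSN, "[SSN]")):
        text = pattern.sub(token, text)
    return text

prompt = "My email is jane.doe@example.com and my cell is 555-123-4567."
print(redact(prompt))  # My email is [EMAIL] and my cell is [PHONE].
```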
6. Actually Read the Permissions and Pop-Ups
Before plugging any personal information into an app or website, users should review its permissions and access requests to make sure it’s a service they’re comfortable sharing with.
The same goes for privacy policies: users should educate themselves on how their data is collected, used and shared.
7. Monitor Your Personal Accounts
Protecting one’s personal data is an ongoing responsibility. That means users should periodically remove old data from their devices, delete dead accounts, unsubscribe from unwanted emails or automated messages and strengthen their passwords. They should also regularly review their financial and social media accounts for any suspicious activity.
“It may seem time-consuming,” Edwards said. “But we have to do a little bit more work on our end to make sure we’re doing the best that we can to protect our own personal data.”
Frequently Asked Questions
What are the three elements of data privacy?
Data privacy can be thought of as three main elements: consent, transparency and security. For example, organizations should obtain a person’s permission to collect, use and share their personal data; be clear about how they do so; and ensure that data is properly protected.
Why is data privacy important?
Data privacy is important because it safeguards people’s sensitive information, preventing any unauthorized access, theft or misuse.