Data and the Impact on Privacy in the Modern Age
Not everyone is aware of how much data is generated about them on a daily basis. Internet searches, location tags, photos, messages, data transfers, emails and more all build a picture of the kind of person you are, and there are whole schools of thought on how that picture can be captured, exploited or even manipulated.
Personal data is the term international regulations use for any information relating to an identifiable individual. In practice, that means things like names, addresses, photos and contact details that can be used to identify someone.
This type of data is given some level of protection across the globe precisely because it can identify you, which poses an obvious risk to your personal privacy.
Data that doesn't identify you directly sits outside that protection, meaning it can be used for a variety of purposes. The surveys you fill out to enter a competition, or complete on survey sites, are most commonly used to help advertisers and marketing companies tailor their campaigns to certain demographics. They can't target you specifically, so they take generic data about age groups, genders or ethnicities and shape their marketing based on the responses from those groups.
There are a number of uses for this more generic data: medical studies, gauging political affiliation or general government statistics, to name a few. The trouble arises when data that should be anonymous can be used to re-identify the person behind it, as the sketch below illustrates.
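To make that risk concrete, here is a minimal, hypothetical sketch in Python of how "anonymous" survey responses can be re-identified by joining them against a public record on quasi-identifiers such as postcode, birth year and gender. Every name, field and record here is invented for illustration; it is not drawn from any real dataset or any specific company's practice.

```python
# Hypothetical illustration: "anonymised" survey data still carries
# quasi-identifiers (postcode, birth year, gender) that can be joined
# against a public record (e.g. an electoral roll) to re-identify people.

# Survey responses with names removed, but quasi-identifiers kept.
anonymous_survey = [
    {"postcode": "2000", "birth_year": 1985, "gender": "F", "income": "high"},
    {"postcode": "2148", "birth_year": 1990, "gender": "M", "income": "low"},
]

# A public dataset that links the same quasi-identifiers to real names.
public_register = [
    {"name": "Jane Citizen", "postcode": "2000", "birth_year": 1985, "gender": "F"},
    {"name": "John Smith", "postcode": "2148", "birth_year": 1990, "gender": "M"},
]

def reidentify(survey_rows, register_rows):
    """Match survey rows to named register rows on shared quasi-identifiers."""
    matches = []
    for row in survey_rows:
        candidates = [
            person for person in register_rows
            if (person["postcode"], person["birth_year"], person["gender"])
            == (row["postcode"], row["birth_year"], row["gender"])
        ]
        # If exactly one person shares the combination, the "anonymous"
        # response has effectively been re-identified.
        if len(candidates) == 1:
            matches.append((candidates[0]["name"], row["income"]))
    return matches

print(reidentify(anonymous_survey, public_register))
# [('Jane Citizen', 'high'), ('John Smith', 'low')]
```

The point is that removing names alone is rarely enough: a handful of unremarkable attributes, combined, can be unique to one person.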
As recently as 2015, the governing regulation for this type of data in Europe dated from 1995. It was only replaced in 2016, when the General Data Protection Regulation (GDPR) was adopted, but when you consider how much the internet grew in that time, and more importantly how much data is now shared on it, that 20-year gap is a huge red flag.
Big Data’s role in the privacy discussion
Big Data isn't necessarily one group of people stealing everyone's info. In fact, the majority of the information 'Big Data' holds was given up voluntarily, with little thought to how it might be used. Big Data is simply the vast amount of data out there that can be accessed by a lot of people.
And with so many people able to access that data, where does the potential for abuse lie? The algorithms used to analyse it are still written by people, so is it possible for individual prejudices to be carried over? Could we see consequences for people of different ethnicities, genders, income brackets or sexual orientations when it comes to things like home loans, employment and insurance?
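As a hypothetical sketch of how that carry-over can happen, the toy Python example below "learns" loan-approval rates from past decisions grouped by postcode. If the history reflects human prejudice, any system that scores new applicants against that history simply reproduces it. All numbers, postcodes and field names are invented for illustration and do not describe any real lender's system.

```python
from collections import defaultdict

# Hypothetical historical loan decisions that already reflect human bias:
# applicants from postcode "2148" were approved far less often.
history = [
    {"postcode": "2000", "approved": True},
    {"postcode": "2000", "approved": True},
    {"postcode": "2000", "approved": False},
    {"postcode": "2148", "approved": False},
    {"postcode": "2148", "approved": False},
    {"postcode": "2148", "approved": True},
]

def approval_rates(records):
    """Naively 'learn' an approval rate per postcode from past decisions."""
    totals, approved = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["postcode"]] += 1
        approved[record["postcode"]] += record["approved"]
    return {pc: approved[pc] / totals[pc] for pc in totals}

rates = approval_rates(history)

def score_applicant(postcode):
    """Score a new applicant purely on the historical rate for their area."""
    return rates.get(postcode, 0.5)

# Two otherwise identical applicants get different scores because the
# scoring inherited the skew that was already in the historical data.
print(round(score_applicant("2000"), 2))  # 0.67
print(round(score_applicant("2148"), 2))  # 0.33
```

Nothing in that code mentions ethnicity or income, yet a proxy like postcode can quietly encode both.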
Is Big Data safe?
Cathy O'Neil, author of 'Weapons of Math Destruction', shone a light on the fact that Big Data firms can be more concerned with generating revenue and turning a profit than with the ethical boundaries of how data is used. You see this manifest in many ways, the most common being the American tradition of lobbying for legal loopholes.
As with anything, there are always exceptions, and Big Data is no different. Some firms are starting to recognise the responsibility they have to the people whose data they hold and are looking for new ways to protect it. Big names like Microsoft and Apple are at the forefront of these advances and have implemented rigorous filters to reduce the total amount of information collected in the first place, so that less of it exists to hand over if the authorities ever request it.
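One widely discussed technique for this kind of "filtering at the source" is randomized response, a simple form of local differential privacy. The sketch below is a generic illustration of the idea, assuming a made-up yes/no sensitive question; it is not a description of Microsoft's or Apple's actual pipelines, and every parameter is chosen purely for the example.

```python
import random

def randomized_response(truthful_answer: bool, p_truth: float = 0.75) -> bool:
    """Report the true answer with probability p_truth, otherwise a coin flip.

    Any individual report is deniable (it may just be noise), yet
    population-level statistics can still be estimated from many reports.
    """
    if random.random() < p_truth:
        return truthful_answer
    return random.random() < 0.5

def estimate_true_rate(reports, p_truth: float = 0.75) -> float:
    """Correct the observed rate for the noise that was deliberately added."""
    observed = sum(reports) / len(reports)
    # observed = p_truth * true_rate + (1 - p_truth) * 0.5, solved for true_rate
    return (observed - (1 - p_truth) * 0.5) / p_truth

# Simulate 10,000 users, 30% of whom truthfully have the sensitive attribute.
random.seed(0)
truths = [random.random() < 0.30 for _ in range(10_000)]
reports = [randomized_response(t) for t in truths]
print(round(estimate_true_rate(reports), 3))  # close to 0.30
```

The appeal of this approach is that the raw, identifying answer never has to be stored at all: the collector only ever sees the noisy report.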
This issue has come to the fore quickly and recently, prompting a lot of discussion about the balance between government access and personal data. In a world locked down by COVID, where personal data is being used to track and control infections, that discussion is more important than ever.