Module 3 - Emerging Trends and Issues on Privacy

Introduction

By now, you are armed with a basic understanding of the concepts of Privacy and Data Protection, as well as the key elements of the Philippines' Data Privacy Act. This is crucial for the next discussion, which delves into the current trends and issues that impact, or at least relate to, Privacy and Data Protection. The objective of this Module is to give you a general sense of what these issues are all about.

If you will recall, we mentioned at the beginning the relationship between the concepts of Privacy and Data Protection and new technologies. We basically said that the evolution of these concepts will continue alongside the introduction of newer, more advanced technologies.

That idea makes this discussion important, because these are the same developments that will influence how we come to define and redefine Privacy and Data Protection in the future. They will also form the basis of modern laws and policies, many of which are already being developed today.

 

Big Data

The term Big Data has been a buzzword for quite some time now. It refers to information assets characterized by high Volume, Velocity, and Variety, so much so that they require specific technologies and analytical methods to be transformed into Value. The term is also used to describe the application of analytical techniques to search, aggregate, and cross-reference large data sets in order to develop intelligence and insights.

These large data sets can range from publicly available data sets to internal customer datasets held by a particular company in the private sector. They include Twitter feeds, Google searches, and call detail records held by network providers. Obviously, in some cases they involve personal data; in others, they don't. In many instances, though, they involve a mix of both.

Features of Big Data

  1. Volume. Big Data doesn’t sample. It just observes and tracks what happens.

  2. Velocity. Big data is often available in real-time.

  3. Variety. Big Data draws from a multitude of sources: text, images, audio, video; plus, it completes missing pieces through data fusion.

A good example would be the trove of information processed by social media platforms like Facebook. Facebook collects new information essentially every second that goes by, which makes it particularly hard to imagine just how much data it is able to accumulate at any given time. The collection involves all sorts of data: information its users post, information about users' reactions to other people's posts, information about websites users visit through links posted by other users, and so on. If one looks at Facebook's system using the three V's as a standard, it is clear that the company is very much into Big Data.
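
To make the ideas of aggregation and cross-referencing more concrete, here is a minimal sketch in Python. The datasets, names, and URLs are purely hypothetical illustrations; real platforms operate at a vastly larger scale and with far more sophisticated tooling.

    from collections import defaultdict

    # Hypothetical, tiny stand-ins for two separate data sets:
    # things users posted, and pages they visited via shared links.
    posts = [
        {"user": "ana", "text": "Training for my first marathon!"},
        {"user": "ben", "text": "New camera arrived today."},
    ]
    page_visits = [
        {"user": "ana", "url": "running-shoes.example.com"},
        {"user": "ana", "url": "fitness-trackers.example.com"},
        {"user": "ben", "url": "camera-lenses.example.com"},
    ]

    # Cross-reference the two sets on the shared "user" field
    # to build a simple interest profile for each person.
    profiles = defaultdict(lambda: {"posts": [], "visited": []})
    for post in posts:
        profiles[post["user"]]["posts"].append(post["text"])
    for visit in page_visits:
        profiles[visit["user"]]["visited"].append(visit["url"])

    for user, profile in profiles.items():
        print(user, profile)

Even this toy example shows how combining data from different sources reveals more about a person than either source does on its own; scaled up to billions of records, this is what gives Big Data its analytical value.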

Privacy and Data Protection Issues

By now, regulators and researchers have already identified several privacy and data protection issues that continue to hound Big Data. Here are two of them:

  1. Transparency (and the use of Privacy Notices). Transparency is a fundamental principle in data protection. To realize it, organizations are either required to or volunteer to put up privacy notices. These documents inform people about the way their personal data are about to be processed by said organizations. This is quite difficult to implement when dealing with Big Data, because many of the uses that emanate from it are not obvious or even conceivable at the time the data are collected. If you don't know what you could end up using the data for, then you can't tell people in advance what those uses are.

  2. Data minimization. Data minimization is part of the broader concept of proportionality, which is another key data protection principle. Its edict is basically that one should only collect the minimum amount of information necessary to achieve a particular objective. This goes against at least two elements of Big Data: Volume and Variety. If you are into Big Data, you want as much information as possible, from as many sources as possible. The bigger the numbers, the more meaningful the insights and predictions you are able to derive from them.



Internet of Things

Another popular development is the so-called "Internet of Things" or "IoT". The IoT is basically a network that connects uniquely identifiable "Things" to the Internet. The term itself is said to have been coined by Kevin Ashton back in 1999, long before it became the reality that we see and experience right now. In 2020, the number of IoT devices supposedly reached 50 billion worldwide.

A “Thing” can be any physical object that is relevant from a user or application perspective. It has sensing/actuation and potential programmability capabilities. Through the exploitation of unique identification and sensing, information about the “Thing” can be collected and the state of the “Thing” can be changed from anywhere, anytime, by anything.
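
To make this more concrete, here is a minimal sketch in Python of a toy "Thing": a uniquely identified smart lamp whose sensor reading can be collected and whose state can be changed by a remote command. The class, method, and command names are hypothetical; real IoT devices typically report to a cloud service over protocols such as MQTT or HTTP.

    import random
    import uuid

    class SmartLamp:
        """A toy 'Thing': uniquely identifiable, with sensing and actuation."""

        def __init__(self):
            self.device_id = str(uuid.uuid4())  # unique identification
            self.is_on = False                  # state that can be actuated

        def read_sensor(self):
            # Sensing: report a (simulated) ambient light level.
            return {"device_id": self.device_id,
                    "ambient_light": random.randint(0, 100)}

        def handle_command(self, command):
            # Actuation: a remote party changes the Thing's state.
            if command == "turn_on":
                self.is_on = True
            elif command == "turn_off":
                self.is_on = False
            return {"device_id": self.device_id, "is_on": self.is_on}

    lamp = SmartLamp()
    print(lamp.read_sensor())               # information collected about the Thing
    print(lamp.handle_command("turn_on"))   # state changed from "anywhere"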

The most common examples would be all the "smart" devices that we now have: smart watches, smart TVs, smart refrigerators, smart speakers, etc.

Let's consider Smart TVs for a moment. They are connected to the internet, which makes it possible for us to watch streaming services like Netflix, Disney+, and HBO Go. A Smart TV is usually accompanied by a special remote, often fondly called a "magic remote", that can pick up voice commands. This is how the Smart TV "senses" the user. It listens to the voice, tries to understand what it is saying, and performs a task based on its interpretation of the user's instructions.

Smart Speakers and Smart Toys typically have the same capability.

Privacy and Data Protection Issues

  1. Transparency. Transparency is a recurring issue where these modern trends are involved, and it is no different with the IoT. Users are frequently unaware of the full range of functions and capabilities of their devices. These past few years, there have been at least two incidents that showcased this problem: one involved Smart TVs and the other a Smart Toy. With the TVs, many owners were at first unaware that, while the voice-recognition function was turned on, the TV was recording every sound it could capture from its surroundings, including people's intimate conversations. The collected data were then transmitted via the internet to the manufacturer and other third parties. The Smart Toy incident was essentially the same: parents did not know that the doll they had given to their child was recording her voice commands and sending them to the doll maker and its service providers.

  2. Security. IoT devices are notorious for their poor security. Unlike smartphones, tablets, and standard personal computers, most other smart devices have very basic designs and builds. They are made with only one objective in mind: to perform a specific function. Security is either not a priority or, in some cases, not considered at all. This makes them extremely vulnerable to hacking, and explains why there have already been reports of strangers talking to babies via unsecured baby monitors. There have also been instances where thousands of smart devices were misused to mount a distributed denial-of-service (DDoS) attack against certain websites. In the latter case, it was discovered that a hacker had created a computer virus, which then spread across a wide network of smart devices, allowing the hacker to gain control over them.



Artificial Intelligence

Artificial intelligence or AI is a term used to describe the capacity of systems or machines to mimic human cognitive functions, namely: (a) generalized learning, or the ability to perform well and adapt in new or unknown environments; (b) reasoning, or the ability to generate conclusions appropriate to the situation; and (c) problem solving, or finding desirable solutions to the problems presented.

It has two main types: narrow and general. What we are seeing today are essentially examples of Narrow AI. We have machines capable of learning, or of being taught, to perform defined tasks. They cannot, however, learn beyond what they were programmed to do. Virtual assistants like Siri and Alexa, automated customer service or chatbots, and self-driving cars are some examples.

Examples of general AI, on the other hand, are still restricted to those we see in movies like "The Terminator" and "Star Wars", where machines already think like humans, are capable of performing complex tasks, and even manage to learn from them. Some can even develop and express emotions. Their outputs, just like those of humans, are also unpredictable. As of today, however, nobody has developed a general AI machine.

Privacy and Data Protection Issues

  1. Bias and Discrimination. So far, the biggest potential drawback associated with AI is its tendency to reflect the biases of its developer. This often leads to discriminatory actions, despite there no longer being humans behind the decisions being made. Some say this is actually worse, because discrimination becomes automated. This, in turn, could mean more people becoming victims at a much faster rate.

  2. Transparency. A related issue is, once again, transparency. One possible solution that could prevent bias from seeping into AI is to have the algorithms that serve as its heart or core vetted properly and regularly, preferably by independent third parties. These algorithms are what make it possible for AI systems and machines to learn, adapt, and make decisions on their own. Unfortunately, developers tend to treat them as proprietary information or assets that should be kept secret or confidential. This essentially makes independent checks or assessments impossible.



Surveillance

Surveillance is the monitoring of behavior, activities, or other changing information for the purpose of influencing, managing, directing, or protecting people. It is the fourth and probably the widest in application among these trends. Unlike before, when this type of activity was usually reserved for government agencies, particularly those in law enforcement and the armed forces, today even the private sector is getting in on the action. If national security and law enforcement are the government's usual reasons, private companies resort to surveillance to control, influence, or restrict the actions of individuals.

Another notable development is the significant improvement in the capabilities of today's surveillance technologies. The impact of traditional surveillance tactics has been somewhat tempered by laws introduced through the years, such as those against illegal wiretapping. The world today, though, is very much different. In the field of communications, for instance, people have far more means and options to communicate with one another. Surveillance or monitoring of these other options is often left unregulated; they are not covered by existing anti-wiretapping policies.

Here is a list of surveillance trends and issues that privacy and data protection advocates are having to deal with today.

Types of Surveillance and Their Inherent Privacy and Data Protection Issues

  1. Mass surveillance. This is the subjection of an entire population or a significant component group to indiscriminate monitoring; no single individual is being targeted. Its very nature makes it a systematic interference with the individual right to privacy. Any system that generates and collects data on individuals without attempting to limit the dataset to well-defined targeted individuals is a form of mass surveillance. Edward Snowden's story several years ago involved the disclosure of a massive surveillance network led by the United States and participated in by four other allied countries, together referred to as the "Five Eyes".

  2. Communications surveillance. This is the monitoring, interception, collection, preservation, and retention by a third party of information that has been communicated, relayed, or generated over communications networks. That third party could be a law enforcement agency, an intelligence agency, a private company, or a malicious actor. Communications surveillance does not require a human to read the intercepted communication; even an automated act of interception represents an interference with the right to privacy. As noted earlier, most legal protections against communications surveillance are ineffective against newer methods of interception that target emails, video calls, chat, and even phone calls made over mobile telephony.

  3. Internet monitoring. This is the act of capturing data as it travels across the internet towards its intended destination. The infrastructure that supports the internet involves physical facilities and electronic systems that connect the world, and monitoring can take place at any point of that infrastructure, depending on what information is being sought. Both the public and private sectors are capable of engaging in this type of surveillance. Tech companies like Google and Facebook want to know what people do online in order to determine what ads to display or show them. Some companies sell their gathered data, or let others use them for a fee, as a business. Those "cookies" we keep hearing about are part of this surveillance framework: they let companies track our online activities (a minimal sketch of how this works appears after this list).

  4. Social media surveillance or social media intelligence (SOCMINT). This refers to the techniques and technologies that allow companies or governments to monitor social networking sites (SNSs), like Facebook or Twitter. They include the monitoring of content, such as messages or images posted, and of other data generated when someone uses a social networking site. This information covers person-to-person, person-to-group, and group-to-group exchanges, and includes both private and public interactions. In a way, SOCMINT is a lot like internet monitoring, except that it is limited to people's activities on social media platforms. At the end of the day, though, they have a similar objective, which is to profile people and to use such profiles for marketing purposes.

  5. Video surveillance. This involves the deployment of technologies (e.g., CCTV) in public and private areas for monitoring purposes. A CCTV system is a connected network of stationary and mobile video cameras and is increasingly used in public areas, private businesses, and public institutions such as schools and hospitals. Systems incorporating video surveillance technologies have far greater powers than simply what the camera sees. Biometric technologies use the transmitted video to profile, sort, and identify populations through facial recognition software. While these systems are often justified by claims that they prevent crime, few studies support this. They can help solve some crimes, but these are usually petty crimes, different from the graver offenses they are primarily intended for, like terrorism. They can also become a national security issue when foreign companies are involved in their implementation. Here in the Philippines, there is one such case: the current "Safe Philippines" Project, which plans to install thousands of CCTV cameras in the capital and in the hometown of the President, will be implemented with the assistance of a Chinese firm. Risks associated with CCTV systems increase when they are also equipped with facial recognition.

  6. Biometrics. Biometric technologies capture and store the physiological and behavioral characteristics of individuals. They are usually employed through government ID systems like the Philippines' PhilSys. In the private sector, companies are also able to collect biometric information through products in which fingerprint and facial recognition technologies are embedded. Characteristics may include voice and facial identifiers, iris patterns, DNA profiles, and fingerprints. When stored in a database, these characteristics can be paired with individuals for later identification and verification. When adopted in the absence of strong legal frameworks and strict safeguards, biometric technologies pose grave threats to privacy and personal security, as their application can be broadened to facilitate discrimination, social sorting, and mass surveillance, and the varying accuracy of the technology can lead to misidentification, fraud, and civic exclusion.
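
To illustrate how the cookies mentioned under internet monitoring link a person's separate page views into a single profile, here is a minimal sketch, assuming a toy server written with Python's standard library. The page names and the "visitor_id" cookie are hypothetical; real tracking infrastructure is far more elaborate and often spans many unrelated websites.

    import uuid
    from collections import defaultdict

    # visitor_id -> list of pages viewed (a crude behavioral profile)
    profiles = defaultdict(list)

    def handle_request(page, cookies):
        """Simulate one page view; `cookies` is what the browser sends back."""
        visitor_id = cookies.get("visitor_id")
        if visitor_id is None:
            # First visit: issue a unique identifier (a simulated Set-Cookie).
            visitor_id = str(uuid.uuid4())
            cookies["visitor_id"] = visitor_id
        # Every later visit carrying the same cookie is tied to the same profile.
        profiles[visitor_id].append(page)
        return cookies

    browser_cookies = {}  # what the "browser" stores between visits
    handle_request("/news/election", browser_cookies)
    handle_request("/shop/running-shoes", browser_cookies)
    handle_request("/travel/cebu-flights", browser_cookies)

    print(profiles)  # one identifier now linked to three separate activities

The same basic mechanism, repeated across many sites and combined with other identifiers, is what allows companies to build the detailed behavioral profiles used for targeted advertising.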



One could consider these four as a sneak peek into the range and constantly increasing number of issues that relate to privacy and data protection. They present varying types and degrees of risks, thereby requiring different strategies or tactics as solutions. Some are more dangerous than others. Others are more easily addressed than their peers.

This highlights once again the importance of an effective implementation of the Data Privacy Act and other relevant laws. If done properly, it should give criminals and other bad actors pause before they decide to commit violations and cause harm to others.

Of course, individuals have some options too in terms of protecting themselves from the dangers posed by these issues. People are not completely helpless. The things one can do to improve one’s defenses and chances against all these potential risks will be discussed in the next Module.