How Is Your Face Being Tracked?
Vignesh Sivaprakasam
Facial recognition and facial analysis software are constructed and coded in ways that embed bias. This "coded gaze," or algorithmic bias, spreads on a massive scale, leading to exclusionary experiences and discriminatory practices throughout society. In one demonstration, the presenter, Joy Buolamwini, showed that a webcam detected a white mask held in front of her face yet failed to detect her own face in front of the same camera. Generic facial recognition software uses machine learning to identify facial features: given training sets of faces, it learns which patterns constitute a face. However, when those training sets are not diverse, faces that deviate from them become difficult for the software to detect. Algorithmic bias also leads to discriminatory practices, as police departments run unregulated and unaudited facial recognition systems covering over 117 million Americans. In addition, machine learning is used in predictive policing, where risk scores can dictate prison sentences. Buolamwini, a facial recognition researcher, argues that we need to monitor this type of technology because of the inherent bias and the lack of human judgment involved (Buolamwini 2019).
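To make the training-set problem concrete, the following is a minimal sketch, in Python with OpenCV, of how one might audit a generic, pre-trained face detector for uneven detection rates across demographic groups. The folder layout, group names, and file format here are hypothetical, and this is not the software Buolamwini tested.

```python
import cv2
from pathlib import Path

# OpenCV ships a generic face detector trained on a fixed set of faces;
# if that training set was not diverse, detection rates can differ by group.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detection_rate(image_dir: str) -> float:
    """Fraction of images in which at least one face is detected."""
    paths = list(Path(image_dir).glob("*.jpg"))
    detected = 0
    for p in paths:
        gray = cv2.imread(str(p), cv2.IMREAD_GRAYSCALE)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            detected += 1
    return detected / len(paths) if paths else 0.0

# Hypothetical audit: compare detection rates across labeled groups.
for group in ["group_a", "group_b"]:
    print(group, detection_rate(f"faces/{group}"))
```

A large gap between the printed rates would be exactly the kind of exclusionary behavior the coded gaze describes.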
Inclusive coding takes all people into account and produces less biased results for users. It concerns who codes, how we code, and why we code. When people from varied backgrounds and fields are encouraged to code, algorithms are examined from different perspectives; this in turn changes how we code, favoring more open and less biased approaches. Finally, encouraging engineers and programmers to treat social change as a priority leads to more inclusive coding. I encourage society to move toward inclusive coding by identifying bias, curating training data inclusively, and developing software conscientiously. Understanding these biased practices should make clear the importance of inclusive and open development that guards against discrimination and bias. It is critical to make technology inclusive of everyone and to place it at the center of social change, rather than letting it become a tool that recycles bias and discrimination. This is especially problematic with hyper-surveillance technologies, which have been found to build profiles of people and to reinforce the biases already present in our society.
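As one small illustration of "curating inclusively," the sketch below checks how demographic groups are represented in a face dataset before any model is trained. The label-file format (one image path and group label per line) and the 10% flagging threshold are assumptions made for this example, not part of any standard tool.

```python
from collections import Counter

def group_balance(label_file: str) -> dict:
    """Return each group's share of the training set."""
    with open(label_file) as f:
        groups = [line.strip().split(",")[1] for line in f if line.strip()]
    counts = Counter(groups)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical label file: each line is "image_path,group_label".
shares = group_balance("train_labels.csv")
for group, share in sorted(shares.items()):
    flag = "  <- underrepresented" if share < 0.10 else ""  # arbitrary cutoff
    print(f"{group}: {share:.1%}{flag}")
```

Flagging underrepresented groups before training is a simple way to catch the lack of diversity that makes detectors fail on some faces.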
One company that has built hyper-surveillance technology is Clearview AI. Clearview AI offers advantages in security and efficiency: using its technology, law enforcement can identify suspects in shoplifting, sex trafficking, child abuse, and homicide cases. The software gives law enforcement access to a database of over three billion images, allowing police to identify suspects quickly. However, there are serious potential negatives, including personal abuse, racial bias, inaccurate results, and weak data security. With access to billions of images, officers could abuse the technology to track romantic partners, and foreign governments could identify prominent individuals to blackmail. This ready availability of information undermines individual security. In addition, Clearview AI has misidentified suspects before; facial recognition technologies carry biases that lead to misclassifications and, in some cases, wrongful arrests. In fact, the company itself has stated that the tool finds matches only 75% of the time, so inaccurate results are unavoidable. A further concern with Clearview AI and facial recognition systems generally is data security. Clearview AI has already been hacked, and its client list was leaked, so the system needs stronger protection against such attacks. While hyper-surveillance is a growing technology, stronger security and privacy safeguards are needed before it can be used effectively and appropriately.
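To see why even a cited 75% match rate is worrying at Clearview's scale, here is some illustrative arithmetic in Python. The false-match rate used below is an assumed value chosen only to show how errors scale against a multi-billion-image database; it is not a published Clearview figure.

```python
DATABASE_SIZE = 3_000_000_000      # roughly three billion images, as reported
TRUE_MATCH_RATE = 0.75             # the company's stated match rate
ASSUMED_FALSE_MATCH_RATE = 1e-6    # hypothetical: one-in-a-million false match per image

missed_searches = 1 - TRUE_MATCH_RATE
expected_false_candidates = DATABASE_SIZE * ASSUMED_FALSE_MATCH_RATE

print(f"Searches that miss the right person: {missed_searches:.0%}")
print(f"Expected false candidates per search: {expected_false_candidates:,.0f}")
```

Under these assumptions, a single search misses the correct person a quarter of the time and can still surface about 3,000 false candidates, which is how misidentification can translate into wrongful arrests.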